Mirror of https://github.com/bytedance/deer-flow.git, synced 2026-04-25 11:18:22 +00:00
refactor(config): Phase 2 final cleanup — delete AppConfig.current() shim
Tail-end of Phase 2:

- Migrate ~70 remaining test sites off AppConfig.current(): drop dead monkey-patches (production no longer calls current()), hoist the mocked config into a local variable, and pass it explicitly. Verified with `grep -rn 'AppConfig\.current' backend/tests` → empty.
- Delete the AppConfig.current() classmethod entirely. The transitional raise-only shim is no longer needed now that no test references it.
- Update docs: plan marked shipped (P2-6..P2-10 in commit 84dccef2); backend/CLAUDE.md Config Lifecycle rewritten to describe the explicit-parameter design; gateway/deps.py docstrings no longer point at the removed current() surface.

AppConfig is now a pure Pydantic value object. Every consumer holds its own captured instance: Gateway (`app.state.config` via `Depends(get_config)`), DeerFlowClient (`self._app_config`), agent runtime (`DeerFlowContext.app_config`), LangGraph Server bootstrap (`AppConfig.from_file()` inside `make_lead_agent`). 2337 non-e2e tests pass.
This commit is contained in:
parent
84dccef230
commit
6beea682d2
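The explicit-parameter design this commit finishes can be sketched minimally. Everything below is a stand-in, not the repository's real code: a frozen dataclass replaces the frozen Pydantic `AppConfig`, and `get_checkpointer` / `DeerFlowClient` are reduced to the shape of the pattern.

```python
# Minimal sketch of the post-Phase-2 pattern: a frozen config value object
# that is passed explicitly, never fetched from a process-global current().
# AppConfig here is a stand-in dataclass; the real one is a Pydantic model.
from dataclasses import dataclass


@dataclass(frozen=True)
class AppConfig:
    model_name: str = "gpt-4o"


def get_checkpointer(config: AppConfig) -> str:
    # Providers receive the config as a parameter.
    return f"checkpointer[{config.model_name}]"


class DeerFlowClient:
    def __init__(self, app_config: AppConfig) -> None:
        # The client captures its own instance once, in the constructor.
        self._app_config = app_config

    def ensure_agent(self) -> str:
        # Every method reads the captured instance, not a global.
        return get_checkpointer(self._app_config)


client = DeerFlowClient(AppConfig(model_name="gpt-4o-mini"))
print(client.ensure_agent())  # checkpointer[gpt-4o-mini]
```

Two clients built with different `AppConfig` instances never contend over shared state, which is the invariant the migrated tests pin down.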
@@ -130,7 +130,7 @@ from app.gateway.app import app
 from app.channels.service import start_channel_service

 # App → Harness (allowed)
-from deerflow.config import get_app_config
+from deerflow.config.app_config import AppConfig

 # Harness → App (FORBIDDEN — enforced by test_harness_boundary.py)
 # from app.gateway.routers.uploads import ... # ← will fail CI
@@ -179,9 +179,16 @@ Setup: Copy `config.example.yaml` to `config.yaml` in the **project root** direc

 **Config Versioning**: `config.example.yaml` has a `config_version` field. On startup, `AppConfig.from_file()` compares user version vs example version and emits a warning if outdated. Missing `config_version` = version 0. Run `make config-upgrade` to auto-merge missing fields. When changing the config schema, bump `config_version` in `config.example.yaml`.

-**Config Lifecycle**: All config models are `frozen=True` (immutable after construction). `AppConfig.from_file()` is a pure function — no side effects on sub-module globals. `get_app_config()` is backed by a single `ContextVar`, set once via `init_app_config()` at process startup. To update config at runtime (e.g., Gateway API updates MCP/Skills), construct a new `AppConfig.from_file()` and call `init_app_config()` again. No mtime detection, no auto-reload.
+**Config Lifecycle**: All config models are `frozen=True` (immutable after construction). `AppConfig.from_file()` is a pure function — no side effects, no process-global state. The resolved `AppConfig` is passed as an explicit parameter down every consumer lane:
+
+- **Gateway**: `app.state.config` populated in lifespan; routers receive it via `Depends(get_config)` from `app/gateway/deps.py`.
+- **Client**: `DeerFlowClient._app_config` captured in the constructor; every method reads `self._app_config`.
+- **Agent run**: wrapped in `DeerFlowContext(app_config=…)` and injected via LangGraph `Runtime[DeerFlowContext].context`. Middleware and tools read `runtime.context.app_config` directly or via `resolve_context(runtime)`.
+- **LangGraph Server bootstrap**: `make_lead_agent` (registered in `langgraph.json`) calls `AppConfig.from_file()` itself — the only place in production that loads from disk at agent-build time.
+
+To update config at runtime (Gateway API mutations for MCP/Skills), write the new file and call `AppConfig.from_file()` to build a fresh snapshot, then swap `app.state.config`. No mtime detection, no auto-reload, no ambient ContextVar lookup (`AppConfig.current()` has been removed).

-**DeerFlowContext**: Per-invocation typed context for the agent execution path, injected via LangGraph `Runtime[DeerFlowContext]`. Holds `app_config: AppConfig`, `thread_id: str`, `agent_name: str | None`. Gateway runtime and `DeerFlowClient` construct full `DeerFlowContext` at invoke time; LangGraph Server path uses a fallback via `resolve_context()`. Middleware and tools access context through `resolve_context(runtime)` which returns a typed `DeerFlowContext` regardless of entry point. Mutable runtime state (`sandbox_id`) flows through `ThreadState.sandbox`, not context.
+**DeerFlowContext**: Per-invocation typed context for the agent execution path, injected via LangGraph `Runtime[DeerFlowContext]`. Holds `app_config: AppConfig`, `thread_id: str`, `agent_name: str | None`. Gateway runtime and `DeerFlowClient` construct full `DeerFlowContext` at invoke time; the LangGraph Server boundary builds one inside `make_lead_agent`. Middleware and tools access context through `resolve_context(runtime)` which returns the typed `DeerFlowContext` — legacy dict/None shapes are rejected. Mutable runtime state (`sandbox_id`) flows through `ThreadState.sandbox`, not context.

 Configuration priority:
 1. Explicit `config_path` argument

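The runtime-update flow described in the docs above (build a fresh snapshot, swap the reference) can be sketched with stand-ins; `from_file` here is a placeholder for `AppConfig.from_file()` and a `SimpleNamespace` stands in for the FastAPI app.

```python
# Sketch of a Gateway-style config reload: build a fresh immutable snapshot
# and swap the app.state reference. Nothing mutates in place and no
# ContextVar is involved. from_file() stands in for AppConfig.from_file().
from dataclasses import dataclass
from types import SimpleNamespace


@dataclass(frozen=True)
class AppConfig:
    config_version: int = 0
    mcp_servers: tuple = ()


def from_file(data: dict) -> AppConfig:
    # Pure function: parse already-written file contents into a snapshot.
    return AppConfig(**data)


app = SimpleNamespace(state=SimpleNamespace(config=from_file({"config_version": 1})))

old_snapshot = app.state.config
# An API mutation writes the new YAML, then rebuilds and swaps the snapshot.
app.state.config = from_file({"config_version": 1, "mcp_servers": ("search",)})

# Handlers that captured old_snapshot keep seeing the old immutable value.
print(old_snapshot.mcp_servers, app.state.config.mcp_servers)
```

Because each snapshot is frozen, in-flight requests holding the old instance are unaffected by the swap; only new `Depends(get_config)` resolutions see the new one.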
@@ -26,11 +26,9 @@ if TYPE_CHECKING:
 def get_config(request: Request) -> AppConfig:
     """FastAPI dependency returning the app-scoped ``AppConfig``.

-    Prefer this over ``AppConfig.current()`` in new code. Reads from
-    ``request.app.state.config`` which is set at startup (``app.py``
-    lifespan) and swapped on config reload (``routers/mcp.py``,
-    ``routers/skills.py``). Phase 2 of the config refactor migrates all
-    router-level ``AppConfig.current()`` callers to this dependency.
+    Reads from ``request.app.state.config`` which is set at startup
+    (``app.py`` lifespan) and swapped on config reload (``routers/mcp.py``,
+    ``routers/skills.py``).
     """
     cfg = getattr(request.app.state, "config", None)
     if cfg is None:
@@ -53,8 +51,8 @@ async def langgraph_runtime(app: FastAPI) -> AsyncGenerator[None, None]:
     from deerflow.runtime.events.store import make_run_event_store

     async with AsyncExitStack() as stack:
-        # app.state.config is populated earlier in lifespan(); thread it into
-        # every provider that used to reach for AppConfig.current().
+        # app.state.config is populated earlier in lifespan(); thread it
+        # explicitly into every provider below.
         config = app.state.config

         app.state.stream_bridge = await stack.enter_async_context(make_stream_bridge(config))

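The lifespan pattern in this hunk (resolve the config once, then pass it into each provider entered on the stack) can be sketched with a stand-in provider; `make_stream_bridge` is the real name from the diff, but its body here is hypothetical.

```python
# Sketch of threading app.state.config explicitly through provider factories
# on an AsyncExitStack, as the lifespan above does. The provider body is a
# stand-in for make_stream_bridge and friends.
import asyncio
from contextlib import AsyncExitStack, asynccontextmanager


@asynccontextmanager
async def make_stream_bridge(config: dict):
    # A real provider would open connections using fields from config.
    yield {"bridge": config["name"]}


async def lifespan(state: dict) -> dict:
    config = state["config"]  # populated earlier in lifespan
    async with AsyncExitStack() as stack:
        # Thread config explicitly into every provider below.
        state["stream_bridge"] = await stack.enter_async_context(make_stream_bridge(config))
        return state


result = asyncio.run(lifespan({"config": {"name": "demo"}}))
print(result["stream_bridge"])
```

Passing `config` as an argument keeps each provider testable with a plain local value, which is exactly what the migrated tests below do.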
@@ -222,20 +222,7 @@ class AppConfig(BaseModel):
         return next((group for group in self.tool_groups if group.name == name), None)

     # AppConfig is a pure value object: construct with ``from_file()``, pass around.
-    # Composition roots that hold the singleton:
+    # Composition roots that hold the resolved instance:
     # - Gateway: ``app.state.config`` via ``Depends(get_config)``
     # - Client: ``DeerFlowClient._app_config``
     # - Agent run: ``Runtime[DeerFlowContext].context.app_config``
-    #
-    # ``current()`` is kept as a deprecated no-op slot purely so legacy tests
-    # that still run ``patch.object(AppConfig, "current", ...)`` can attach
-    # without an ``AttributeError`` at teardown. Production code never calls
-    # it — any in-process invocation raises so regressions are loud.
-
-    @classmethod
-    def current(cls) -> AppConfig:
-        raise RuntimeError(
-            "AppConfig.current() is removed. Pass AppConfig explicitly: "
-            "`runtime.context.app_config` in agent paths, `Depends(get_config)` in Gateway, "
-            "`self._app_config` in DeerFlowClient."
-        )

@@ -63,77 +63,54 @@ class TestGetCheckpointer:
         """get_checkpointer should return InMemorySaver when not configured."""
         from langgraph.checkpoint.memory import InMemorySaver

-        with patch.object(AppConfig, "current", return_value=_make_config()):
-            cp = get_checkpointer(AppConfig.current())
+        cfg = _make_config()
+        cp = get_checkpointer(cfg)
         assert cp is not None
         assert isinstance(cp, InMemorySaver)

     def test_raises_when_config_file_missing(self):
         """A missing config.yaml is a deployment error, not a cue to degrade to InMemorySaver.

         Silent degradation would drop persistent-run state on process restart.
         `get_checkpointer` only falls back to InMemorySaver for the explicit
         `checkpointer: null` opt-in (test above), not for I/O failure.
         """
-        with patch.object(AppConfig, "current", side_effect=FileNotFoundError):
-            with pytest.raises(FileNotFoundError):
-                get_checkpointer(AppConfig.current())
+        with pytest.raises(FileNotFoundError):
+            get_checkpointer(...)

     def test_memory_returns_in_memory_saver(self):
         from langgraph.checkpoint.memory import InMemorySaver

         cfg = _make_config(CheckpointerConfig(type="memory"))
-        with patch.object(AppConfig, "current", return_value=cfg):
-            cp = get_checkpointer(AppConfig.current())
+        cp = get_checkpointer(cfg)
         assert isinstance(cp, InMemorySaver)

     def test_memory_singleton(self):
         cfg = _make_config(CheckpointerConfig(type="memory"))
-        with patch.object(AppConfig, "current", return_value=cfg):
-            cp1 = get_checkpointer(AppConfig.current())
-            cp2 = get_checkpointer(AppConfig.current())
+        cp1 = get_checkpointer(cfg)
+        cp2 = get_checkpointer(cfg)
         assert cp1 is cp2

     def test_reset_clears_singleton(self):
         cfg = _make_config(CheckpointerConfig(type="memory"))
-        with patch.object(AppConfig, "current", return_value=cfg):
-            cp1 = get_checkpointer(AppConfig.current())
-            reset_checkpointer()
-            cp2 = get_checkpointer(AppConfig.current())
+        cp1 = get_checkpointer(cfg)
+        reset_checkpointer()
+        cp2 = get_checkpointer(cfg)
         assert cp1 is not cp2

     def test_sqlite_raises_when_package_missing(self):
         cfg = _make_config(CheckpointerConfig(type="sqlite", connection_string="/tmp/test.db"))
-        with (
-            patch.object(AppConfig, "current", return_value=cfg),
-            patch.dict(sys.modules, {"langgraph.checkpoint.sqlite": None}),
-        ):
+        with patch.dict(sys.modules, {"langgraph.checkpoint.sqlite": None}):
             reset_checkpointer()
             with pytest.raises(ImportError, match="langgraph-checkpoint-sqlite"):
-                get_checkpointer(AppConfig.current())
+                get_checkpointer(cfg)

     def test_postgres_raises_when_package_missing(self):
         cfg = _make_config(CheckpointerConfig(type="postgres", connection_string="postgresql://localhost/db"))
-        with (
-            patch.object(AppConfig, "current", return_value=cfg),
-            patch.dict(sys.modules, {"langgraph.checkpoint.postgres": None}),
-        ):
+        with patch.dict(sys.modules, {"langgraph.checkpoint.postgres": None}):
             reset_checkpointer()
             with pytest.raises(ImportError, match="langgraph-checkpoint-postgres"):
-                get_checkpointer(AppConfig.current())
+                get_checkpointer(cfg)

     def test_postgres_raises_when_connection_string_missing(self):
         cfg = _make_config(CheckpointerConfig(type="postgres"))
         mock_saver = MagicMock()
         mock_module = MagicMock()
         mock_module.PostgresSaver = mock_saver
-        with (
-            patch.object(AppConfig, "current", return_value=cfg),
-            patch.dict(sys.modules, {"langgraph.checkpoint.postgres": mock_module}),
-        ):
+        with patch.dict(sys.modules, {"langgraph.checkpoint.postgres": mock_module}):
             reset_checkpointer()
             with pytest.raises(ValueError, match="connection_string is required"):
-                get_checkpointer(AppConfig.current())
+                get_checkpointer(cfg)

     def test_sqlite_creates_saver(self):
         """SQLite checkpointer is created when package is available."""
@@ -150,12 +127,9 @@ class TestGetCheckpointer:
         mock_module = MagicMock()
         mock_module.SqliteSaver = mock_saver_cls

-        with (
-            patch.object(AppConfig, "current", return_value=cfg),
-            patch.dict(sys.modules, {"langgraph.checkpoint.sqlite": mock_module}),
-        ):
+        with patch.dict(sys.modules, {"langgraph.checkpoint.sqlite": mock_module}):
             reset_checkpointer()
-            cp = get_checkpointer(AppConfig.current())
+            cp = get_checkpointer(cfg)

         assert cp is mock_saver_instance
         mock_saver_cls.from_conn_string.assert_called_once()
@@ -176,12 +150,9 @@ class TestGetCheckpointer:
         mock_pg_module = MagicMock()
         mock_pg_module.PostgresSaver = mock_saver_cls

-        with (
-            patch.object(AppConfig, "current", return_value=cfg),
-            patch.dict(sys.modules, {"langgraph.checkpoint.postgres": mock_pg_module}),
-        ):
+        with patch.dict(sys.modules, {"langgraph.checkpoint.postgres": mock_pg_module}):
             reset_checkpointer()
-            cp = get_checkpointer(AppConfig.current())
+            cp = get_checkpointer(cfg)

         assert cp is mock_saver_instance
         mock_saver_cls.from_conn_string.assert_called_once_with("postgresql://localhost/db")
@@ -209,7 +180,6 @@ class TestAsyncCheckpointer:
         mock_module.AsyncSqliteSaver = mock_saver_cls

         with (
-            patch.object(AppConfig, "current", return_value=mock_config),
             patch.dict(sys.modules, {"langgraph.checkpoint.sqlite.aio": mock_module}),
             patch("deerflow.runtime.checkpointer.async_provider.asyncio.to_thread", new_callable=AsyncMock) as mock_to_thread,
             patch(
@@ -217,7 +187,7 @@ class TestAsyncCheckpointer:
                 return_value="/tmp/resolved/test.db",
             ),
         ):
-            async with make_checkpointer(AppConfig.current()) as saver:
+            async with make_checkpointer(mock_config) as saver:
                 assert saver is mock_saver

         mock_to_thread.assert_awaited_once()
@@ -248,7 +218,7 @@ class TestAppConfigLoadsCheckpointer:

 class TestClientCheckpointerFallback:
     def test_client_uses_config_checkpointer_when_none_provided(self):
-        """DeerFlowClient._ensure_agent falls back to get_checkpointer(AppConfig.current()) when checkpointer=None."""
+        """DeerFlowClient._ensure_agent falls back to get_checkpointer(app_config) when checkpointer=None."""
         # This is a structural test — verifying the fallback path exists.
         cfg = _make_config(CheckpointerConfig(type="memory"))
         assert cfg.checkpointer is not None

@@ -1,12 +1,10 @@
 """Test for issue #1016: checkpointer should not return None."""

-from unittest.mock import MagicMock, patch
+from unittest.mock import MagicMock

 import pytest
 from langgraph.checkpoint.memory import InMemorySaver

-from deerflow.config.app_config import AppConfig
-

 class TestCheckpointerNoneFix:
     """Tests that checkpointer context managers return InMemorySaver instead of None."""
@@ -16,42 +14,38 @@ class TestCheckpointerNoneFix:
         """make_checkpointer should return InMemorySaver when config.checkpointer is None."""
         from deerflow.runtime.checkpointer.async_provider import make_checkpointer

         # Mock AppConfig.current to return a config with checkpointer=None and database=None
         mock_config = MagicMock()
         mock_config.checkpointer = None
         mock_config.database = None

-        with patch.object(AppConfig, "current", return_value=mock_config):
-            async with make_checkpointer(AppConfig.current()) as checkpointer:
-                # Should return InMemorySaver, not None
-                assert checkpointer is not None
-                assert isinstance(checkpointer, InMemorySaver)
+        async with make_checkpointer(mock_config) as checkpointer:
+            # Should return InMemorySaver, not None
+            assert checkpointer is not None
+            assert isinstance(checkpointer, InMemorySaver)

-                # Should be able to call alist() without AttributeError
-                # This is what LangGraph does and what was failing in issue #1016
-                result = []
-                async for item in checkpointer.alist(config={"configurable": {"thread_id": "test"}}):
-                    result.append(item)
+            # Should be able to call alist() without AttributeError
+            # This is what LangGraph does and what was failing in issue #1016
+            result = []
+            async for item in checkpointer.alist(config={"configurable": {"thread_id": "test"}}):
+                result.append(item)

-                # Empty list is expected for a fresh checkpointer
-                assert result == []
+            # Empty list is expected for a fresh checkpointer
+            assert result == []

     def test_sync_checkpointer_context_returns_in_memory_saver_when_not_configured(self):
         """checkpointer_context should return InMemorySaver when config.checkpointer is None."""
         from deerflow.runtime.checkpointer.provider import checkpointer_context

         # Mock AppConfig.get to return a config with checkpointer=None
         mock_config = MagicMock()
         mock_config.checkpointer = None

-        with patch.object(AppConfig, "current", return_value=mock_config):
-            with checkpointer_context(AppConfig.current()) as checkpointer:
-                # Should return InMemorySaver, not None
-                assert checkpointer is not None
-                assert isinstance(checkpointer, InMemorySaver)
+        with checkpointer_context(mock_config) as checkpointer:
+            # Should return InMemorySaver, not None
+            assert checkpointer is not None
+            assert isinstance(checkpointer, InMemorySaver)

-                # Should be able to call list() without AttributeError
-                result = list(checkpointer.list(config={"configurable": {"thread_id": "test"}}))
+            # Should be able to call list() without AttributeError
+            result = list(checkpointer.list(config={"configurable": {"thread_id": "test"}}))

-                # Empty list is expected for a fresh checkpointer
-                assert result == []
+            # Empty list is expected for a fresh checkpointer
+            assert result == []

@@ -70,8 +70,7 @@ class TestClientInit:

     def test_custom_params(self, mock_app_config):
         mock_middleware = MagicMock()
-        with patch.object(AppConfig, "current", return_value=mock_app_config):
-            c = DeerFlowClient(model_name="gpt-4", thinking_enabled=False, subagent_enabled=True, plan_mode=True, agent_name="test-agent", available_skills={"skill1", "skill2"}, middlewares=[mock_middleware])
+        c = DeerFlowClient(model_name="gpt-4", thinking_enabled=False, subagent_enabled=True, plan_mode=True, agent_name="test-agent", available_skills={"skill1", "skill2"}, middlewares=[mock_middleware])
         assert c._model_name == "gpt-4"
         assert c._thinking_enabled is False
         assert c._subagent_enabled is True
@@ -81,11 +80,10 @@ class TestClientInit:
         assert c._middlewares == [mock_middleware]

     def test_invalid_agent_name(self, mock_app_config):
-        with patch.object(AppConfig, "current", return_value=mock_app_config):
-            with pytest.raises(ValueError, match="Invalid agent name"):
-                DeerFlowClient(agent_name="invalid name with spaces!")
-            with pytest.raises(ValueError, match="Invalid agent name"):
-                DeerFlowClient(agent_name="../path/traversal")
+        with pytest.raises(ValueError, match="Invalid agent name"):
+            DeerFlowClient(agent_name="invalid name with spaces!")
+        with pytest.raises(ValueError, match="Invalid agent name"):
+            DeerFlowClient(agent_name="../path/traversal")

     def test_custom_config_path(self, mock_app_config):
         # rather than touching AppConfig.init() / process-global state.
@@ -96,8 +94,7 @@ class TestClientInit:

     def test_checkpointer_stored(self, mock_app_config):
         cp = MagicMock()
-        with patch.object(AppConfig, "current", return_value=mock_app_config):
-            c = DeerFlowClient(checkpointer=cp)
+        c = DeerFlowClient(checkpointer=cp)
         assert c._checkpointer is cp


@@ -1840,7 +1837,6 @@ class TestScenarioConfigManagement:
         with (
             patch("deerflow.skills.loader.load_skills", side_effect=[[skill], [toggled]]),
             patch("deerflow.client.ExtensionsConfig.resolve_config_path", return_value=config_file),
-            patch.object(AppConfig, "current", return_value=MagicMock(extensions=ext_config)),
             patch("deerflow.config.app_config.AppConfig.from_file", return_value=MagicMock()),
         ):
             skill_result = client.update_skill("code-gen", enabled=False)
@@ -2093,7 +2089,6 @@ class TestScenarioSkillInstallAndUse:
         with (
             patch("deerflow.skills.loader.load_skills", side_effect=[[installed_skill], [disabled_skill]]),
             patch("deerflow.client.ExtensionsConfig.resolve_config_path", return_value=config_file),
-            patch.object(AppConfig, "current", return_value=MagicMock(extensions=ext_config)),
             patch("deerflow.config.app_config.AppConfig.from_file", return_value=MagicMock()),
         ):
             toggled = client.update_skill("my-analyzer", enabled=False)
@@ -2700,7 +2695,6 @@ class TestConfigUpdateErrors:
         with (
             patch("deerflow.skills.loader.load_skills", side_effect=[[skill], []]),
             patch("deerflow.client.ExtensionsConfig.resolve_config_path", return_value=config_file),
-            patch.object(AppConfig, "current", return_value=MagicMock(extensions=ext_config)),
             patch("deerflow.config.app_config.AppConfig.from_file", return_value=MagicMock()),
         ):
             with pytest.raises(RuntimeError, match="disappeared"):
@@ -3115,7 +3109,6 @@ class TestBugAgentInvalidationInconsistency:
         with (
             patch("deerflow.skills.loader.load_skills", side_effect=[[skill], [updated]]),
             patch("deerflow.client.ExtensionsConfig.resolve_config_path", return_value=config_file),
-            patch.object(AppConfig, "current", return_value=MagicMock(extensions=ext_config)),
             patch("deerflow.config.app_config.AppConfig.from_file", return_value=MagicMock()),
         ):
             client.update_skill("s1", enabled=False)

@@ -102,12 +102,7 @@ def e2e_env(tmp_path, monkeypatch):
     monkeypatch.setattr("deerflow.config.paths._paths", None)
     monkeypatch.setattr("deerflow.sandbox.sandbox_provider._default_sandbox_provider", None)

-    # 2. Inject a clean AppConfig via the ContextVar-backed singleton.
-    # Title, memory, and summarization are disabled in _make_e2e_config().
-    config = _make_e2e_config()
-    monkeypatch.setattr(AppConfig, "current", staticmethod(lambda: config))
-
-    # 3. Exclude TitleMiddleware from the chain.
+    # 2. Exclude TitleMiddleware from the chain.
     # It triggers an extra LLM call to generate a thread title, which adds
     # non-determinism and cost to E2E tests (title generation is already
     # disabled via TitleConfig above, but the middleware still participates

@@ -1,7 +1,7 @@
 """Multi-client isolation regression test.

 Phase 2 Task P2-3: ``DeerFlowClient`` now captures its ``AppConfig`` in the
-constructor instead of going through process-global ``AppConfig.current()``.
+constructor instead of going through a process-global config.
 This test pins the resulting invariant: two clients with different configs
 can coexist without contending over shared state.

@@ -3,13 +3,12 @@
 from __future__ import annotations

 from pathlib import Path
-from unittest.mock import MagicMock, patch
+from unittest.mock import patch

 import pytest
 import yaml
 from fastapi.testclient import TestClient

 from deerflow.config.app_config import AppConfig
 from deerflow.config.memory_config import MemoryConfig

 _TEST_MEMORY_CONFIG = MemoryConfig()
@@ -332,12 +331,8 @@ class TestMemoryFilePath:
     def test_global_memory_path(self, tmp_path):
         """None agent_name should return global memory file."""
         from deerflow.agents.memory.storage import FileMemoryStorage
-        from deerflow.config.memory_config import MemoryConfig

-        with (
-            patch("deerflow.agents.memory.storage.get_paths", return_value=_make_paths(tmp_path)),
-            patch.object(AppConfig, "current", return_value=MagicMock(memory=MemoryConfig(storage_path=""))),
-        ):
+        with patch("deerflow.agents.memory.storage.get_paths", return_value=_make_paths(tmp_path)):
             storage = FileMemoryStorage(_TEST_MEMORY_CONFIG)
             path = storage._get_memory_file_path(None)
             assert path == tmp_path / "memory.json"
@@ -345,24 +340,16 @@ class TestMemoryFilePath:
     def test_agent_memory_path(self, tmp_path):
         """Providing agent_name should return per-agent memory file."""
         from deerflow.agents.memory.storage import FileMemoryStorage
-        from deerflow.config.memory_config import MemoryConfig

-        with (
-            patch("deerflow.agents.memory.storage.get_paths", return_value=_make_paths(tmp_path)),
-            patch.object(AppConfig, "current", return_value=MagicMock(memory=MemoryConfig(storage_path=""))),
-        ):
+        with patch("deerflow.agents.memory.storage.get_paths", return_value=_make_paths(tmp_path)):
             storage = FileMemoryStorage(_TEST_MEMORY_CONFIG)
             path = storage._get_memory_file_path("code-reviewer")
             assert path == tmp_path / "agents" / "code-reviewer" / "memory.json"

     def test_different_paths_for_different_agents(self, tmp_path):
         from deerflow.agents.memory.storage import FileMemoryStorage
-        from deerflow.config.memory_config import MemoryConfig

-        with (
-            patch("deerflow.agents.memory.storage.get_paths", return_value=_make_paths(tmp_path)),
-            patch.object(AppConfig, "current", return_value=MagicMock(memory=MemoryConfig(storage_path=""))),
-        ):
+        with patch("deerflow.agents.memory.storage.get_paths", return_value=_make_paths(tmp_path)):
             storage = FileMemoryStorage(_TEST_MEMORY_CONFIG)
             path_global = storage._get_memory_file_path(None)
             path_a = storage._get_memory_file_path("agent-a")

@@ -5,8 +5,6 @@ from unittest.mock import MagicMock, patch

 import pytest

-from deerflow.config.app_config import AppConfig
-
 # --- Phase 2 test helper: injected runtime for community tools ---
 from types import SimpleNamespace as _P2NS
 from deerflow.config.app_config import AppConfig as _P2AppConfig
@@ -22,7 +20,7 @@ def _runtime_with_config(config):
     ``DeerFlowContext`` is a frozen dataclass typed as ``AppConfig`` but
     dataclasses don't enforce the type at runtime — handing a Mock through
     lets tests exercise the tool's ``get_tool_config`` lookup without going
-    via ``AppConfig.current``.
+    through a process-global config.
     """
     ctx = _P2Ctx.__new__(_P2Ctx)
     object.__setattr__(ctx, "app_config", config)
@@ -35,17 +33,8 @@ def _runtime_with_config(config):

 @pytest.fixture
 def mock_app_config():
-    """Mock the app config to return tool configurations."""
-    with patch.object(AppConfig, "current") as mock_config:
-        tool_config = MagicMock()
-        tool_config.model_extra = {
-            "max_results": 5,
-            "search_type": "auto",
-            "contents_max_characters": 1000,
-            "api_key": "test-api-key",
-        }
-        mock_config.return_value.get_tool_config.return_value = tool_config
-        yield mock_config
+    """Fixture retained as a pass-through: tests inject config via runtime directly."""
+    yield


 @pytest.fixture

@@ -1,9 +1,8 @@
 """Tests for the FastAPI get_config dependency.

 Phase 2 step 1: introduces the new explicit-config primitive that
-resolves ``AppConfig`` from ``request.app.state.config``. This coexists
-with the existing ``AppConfig.current()`` process-global during the
-migration; it becomes the sole mechanism after Phase 2 task P2-10.
+resolves ``AppConfig`` from ``request.app.state.config``. After migration,
+it is the sole mechanism.
 """

 from __future__ import annotations

@@ -334,8 +334,6 @@ class TestGuardrailsConfig:
         assert config.provider.config == {"denied_tools": ["bash"]}

     def test_guardrails_config_via_app_config(self):
-        from unittest.mock import patch
-
         from deerflow.config.app_config import AppConfig
         from deerflow.config.guardrails_config import GuardrailProviderConfig, GuardrailsConfig
         from deerflow.config.sandbox_config import SandboxConfig
@@ -344,6 +342,5 @@ class TestGuardrailsConfig:
             sandbox=SandboxConfig(use="test"),
             guardrails=GuardrailsConfig(enabled=True, provider=GuardrailProviderConfig(use="test:Foo")),
         )
-        with patch.object(AppConfig, "current", return_value=cfg):
-            config = AppConfig.current().guardrails
-            assert config.enabled is True
+        config = cfg.guardrails
+        assert config.enabled is True

@@ -5,7 +5,6 @@ from unittest.mock import MagicMock, patch

 from deerflow.community.infoquest import tools
 from deerflow.community.infoquest.infoquest_client import InfoQuestClient
-from deerflow.config.app_config import AppConfig

 # --- Phase 2 test helper: injected runtime for community tools ---
 from types import SimpleNamespace as _P2NS
@@ -160,8 +159,7 @@ class TestInfoQuestClient:
         mock_get_client.assert_called_once()
         mock_client.fetch.assert_called_once_with("https://example.com")

-    @patch.object(AppConfig, "current")
-    def test_get_infoquest_client(self, mock_get):
+    def test_get_infoquest_client(self):
         """Test _get_infoquest_client function with config."""
         mock_config = MagicMock()
         # Add image_search config to the side_effect
@@ -170,7 +168,6 @@ class TestInfoQuestClient:
             MagicMock(model_extra={"fetch_time": 10, "timeout": 30, "navigation_timeout": 60}),  # web_fetch config
             MagicMock(model_extra={"image_search_time_range": 7, "image_size": "l"}),  # image_search config
         ]
-        mock_get.return_value = mock_config

         client = tools._get_infoquest_client(mock_config)


@@ -6,7 +6,6 @@ from types import SimpleNamespace
 import pytest

 from deerflow.config.acp_config import ACPAgentConfig
-from deerflow.config.app_config import AppConfig
 from deerflow.config.extensions_config import ExtensionsConfig, McpServerConfig
 from deerflow.tools.builtins.invoke_acp_agent_tool import (
     _build_acp_mcp_servers,
@@ -679,11 +678,10 @@ def test_get_available_tools_includes_invoke_acp_agent_when_agents_configured(mo
         },
         get_model_config=lambda name: None,
     )
-    monkeypatch.setattr(AppConfig, "current", staticmethod(lambda: fake_config))
     monkeypatch.setattr(
         "deerflow.config.extensions_config.ExtensionsConfig.from_file",
         classmethod(lambda cls: ExtensionsConfig(mcp_servers={}, skills={})),
     )

-    tools = get_available_tools(include_mcp=True, subagent_enabled=False, app_config=AppConfig.current())
+    tools = get_available_tools(include_mcp=True, subagent_enabled=False, app_config=fake_config)
     assert "invoke_acp_agent" in [tool.name for tool in tools]

@@ -9,7 +9,6 @@ import pytest
 import deerflow.community.jina_ai.jina_client as jina_client_module
 from deerflow.community.jina_ai.jina_client import JinaClient
 from deerflow.community.jina_ai.tools import web_fetch_tool
-from deerflow.config.app_config import AppConfig

 # --- Phase 2 test helper: injected runtime for community tools ---
 from types import SimpleNamespace as _P2NS
@ -165,7 +164,6 @@ async def test_web_fetch_tool_returns_error_on_crawl_failure(monkeypatch):
|
||||
|
||||
mock_config = MagicMock()
|
||||
mock_config.get_tool_config.return_value = None
|
||||
monkeypatch.setattr(AppConfig, "current", staticmethod(lambda: mock_config))
|
||||
monkeypatch.setattr(JinaClient, "crawl", mock_crawl)
|
||||
result = await web_fetch_tool.coroutine(url="https://example.com", runtime=_P2_RUNTIME)
|
||||
assert result.startswith("Error:")
|
||||
@ -181,7 +179,6 @@ async def test_web_fetch_tool_returns_markdown_on_success(monkeypatch):
|
||||
|
||||
mock_config = MagicMock()
|
||||
mock_config.get_tool_config.return_value = None
|
||||
monkeypatch.setattr(AppConfig, "current", staticmethod(lambda: mock_config))
|
||||
monkeypatch.setattr(JinaClient, "crawl", mock_crawl)
|
||||
result = await web_fetch_tool.coroutine(url="https://example.com", runtime=_P2_RUNTIME)
|
||||
assert "Hello world" in result
|
||||
|
||||
@@ -75,7 +75,6 @@ def test_make_lead_agent_disables_thinking_when_model_does_not_support_it(monkey

     import deerflow.tools as tools_module

-    monkeypatch.setattr(AppConfig, "current", staticmethod(lambda: app_config))
     monkeypatch.setattr(tools_module, "get_available_tools", lambda **kwargs: [])
     monkeypatch.setattr(lead_agent_module, "_build_middlewares", lambda app_config, config, model_name, agent_name=None: [])

@@ -123,7 +122,6 @@ def test_build_middlewares_uses_resolved_model_name_for_vision(monkeypatch):
         ]
     )

-    monkeypatch.setattr(AppConfig, "current", staticmethod(lambda: app_config))
     monkeypatch.setattr(lead_agent_module, "_create_summarization_middleware", lambda _ac: None)
     monkeypatch.setattr(lead_agent_module, "_create_todo_list_middleware", lambda is_plan_mode: None)

@@ -137,7 +135,6 @@
 def test_create_summarization_middleware_uses_configured_model_alias(monkeypatch):
     app_config = _make_app_config([_make_model("default", supports_thinking=False)])
     patched = app_config.model_copy(update={"summarization": SummarizationConfig(enabled=True, model_name="model-masswork")})
-    monkeypatch.setattr(AppConfig, "current", staticmethod(lambda: patched))

     from unittest.mock import MagicMock

@@ -3,7 +3,6 @@ from types import SimpleNamespace

 from deerflow.agents.lead_agent.prompt import get_skills_prompt_section
 from deerflow.config.agents_config import AgentConfig
-from deerflow.config.app_config import AppConfig
 from deerflow.skills.types import Skill


@@ -105,7 +104,6 @@ def test_make_lead_agent_empty_skills_passed_correctly(monkeypatch):
     from deerflow.agents.lead_agent import agent as lead_agent_module

     # Mock dependencies
-    monkeypatch.setattr(AppConfig, "current", staticmethod(lambda: MagicMock()))
     monkeypatch.setattr(lead_agent_module, "_resolve_model_name", lambda app_config=None, x=None: "default-model")
     monkeypatch.setattr(lead_agent_module, "create_chat_model", lambda **kwargs: "model")
     monkeypatch.setattr("deerflow.tools.get_available_tools", lambda **kwargs: [])
@@ -117,7 +115,6 @@

     mock_app_config = MagicMock()
     mock_app_config.get_model_config.return_value = MockModelConfig()
-    monkeypatch.setattr(AppConfig, "current", staticmethod(lambda: mock_app_config))

     captured_skills = []

@@ -1,6 +1,5 @@
 from types import SimpleNamespace

-from deerflow.config.app_config import AppConfig
 from deerflow.tools.tools import get_available_tools

@@ -22,26 +21,26 @@ def _make_config(*, allow_host_bash: bool, sandbox_use: str = "deerflow.sandbox.


 def test_get_available_tools_hides_bash_for_default_local_sandbox(monkeypatch):
-    monkeypatch.setattr(AppConfig, "current", staticmethod(lambda: _make_config(allow_host_bash=False)))
+    app_config = _make_config(allow_host_bash=False)
     monkeypatch.setattr(
         "deerflow.tools.tools.resolve_variable",
         lambda use, _: SimpleNamespace(name="bash" if "bash" in use else "ls"),
     )

-    names = [tool.name for tool in get_available_tools(include_mcp=False, subagent_enabled=False, app_config=AppConfig.current())]
+    names = [tool.name for tool in get_available_tools(include_mcp=False, subagent_enabled=False, app_config=app_config)]

     assert "bash" not in names
     assert "ls" in names


 def test_get_available_tools_keeps_bash_when_explicitly_enabled(monkeypatch):
-    monkeypatch.setattr(AppConfig, "current", staticmethod(lambda: _make_config(allow_host_bash=True)))
+    app_config = _make_config(allow_host_bash=True)
     monkeypatch.setattr(
         "deerflow.tools.tools.resolve_variable",
         lambda use, _: SimpleNamespace(name="bash" if "bash" in use else "ls"),
     )

-    names = [tool.name for tool in get_available_tools(include_mcp=False, subagent_enabled=False, app_config=AppConfig.current())]
+    names = [tool.name for tool in get_available_tools(include_mcp=False, subagent_enabled=False, app_config=app_config)]

     assert "bash" in names
     assert "ls" in names
@@ -52,13 +51,12 @@ def test_get_available_tools_hides_renamed_host_bash_alias(monkeypatch):
         allow_host_bash=False,
         extra_tools=[SimpleNamespace(name="shell", group="bash", use="deerflow.sandbox.tools:bash_tool")],
     )
-    monkeypatch.setattr(AppConfig, "current", staticmethod(lambda: config))
     monkeypatch.setattr(
         "deerflow.tools.tools.resolve_variable",
         lambda use, _: SimpleNamespace(name="bash" if "bash_tool" in use else "ls"),
     )

-    names = [tool.name for tool in get_available_tools(include_mcp=False, subagent_enabled=False, app_config=AppConfig.current())]
+    names = [tool.name for tool in get_available_tools(include_mcp=False, subagent_enabled=False, app_config=config)]

     assert "bash" not in names
     assert "shell" not in names
@@ -70,13 +68,12 @@ def test_get_available_tools_keeps_bash_for_aio_sandbox(monkeypatch):
         allow_host_bash=False,
         sandbox_use="deerflow.community.aio_sandbox:AioSandboxProvider",
     )
-    monkeypatch.setattr(AppConfig, "current", staticmethod(lambda: config))
     monkeypatch.setattr(
         "deerflow.tools.tools.resolve_variable",
         lambda use, _: SimpleNamespace(name="bash" if "bash_tool" in use else "ls"),
     )

-    names = [tool.name for tool in get_available_tools(include_mcp=False, subagent_enabled=False, app_config=AppConfig.current())]
+    names = [tool.name for tool in get_available_tools(include_mcp=False, subagent_enabled=False, app_config=config)]

     assert "bash" in names
     assert "ls" in names

@@ -1,10 +1,8 @@
 import errno
 from types import SimpleNamespace
-from unittest.mock import patch

 import pytest

-from deerflow.config.app_config import AppConfig
 from deerflow.sandbox.local.local_sandbox import LocalSandbox, PathMapping
 from deerflow.sandbox.local.local_sandbox_provider import LocalSandboxProvider

@@ -313,8 +311,7 @@ class TestLocalSandboxProviderMounts:
             sandbox=sandbox_config,
         )

-        with patch.object(AppConfig, "current", return_value=config):
-            provider = LocalSandboxProvider(app_config=config)
+        provider = LocalSandboxProvider(app_config=config)

         assert [m.container_path for m in provider._path_mappings] == ["/custom-skills"]

@@ -335,8 +332,7 @@ class TestLocalSandboxProviderMounts:
             sandbox=sandbox_config,
         )

-        with patch.object(AppConfig, "current", return_value=config):
-            provider = LocalSandboxProvider(app_config=config)
+        provider = LocalSandboxProvider(app_config=config)

         assert [m.container_path for m in provider._path_mappings] == ["/mnt/skills"]

@@ -359,8 +355,7 @@ class TestLocalSandboxProviderMounts:
             sandbox=sandbox_config,
         )

-        with patch.object(AppConfig, "current", return_value=config):
-            provider = LocalSandboxProvider(app_config=config)
+        provider = LocalSandboxProvider(app_config=config)

         assert [m.container_path for m in provider._path_mappings] == ["/mnt/skills"]

@@ -383,7 +378,6 @@ class TestLocalSandboxProviderMounts:
             sandbox=sandbox_config,
         )

-        with patch.object(AppConfig, "current", return_value=config):
-            provider = LocalSandboxProvider(app_config=config)
+        provider = LocalSandboxProvider(app_config=config)

         assert [m.container_path for m in provider._path_mappings] == ["/mnt/skills", "/mnt/data"]

@@ -25,10 +25,7 @@ def _make_config(**memory_overrides) -> AppConfig:
 def test_queue_add_preserves_existing_correction_flag_for_same_thread() -> None:
     queue = MemoryUpdateQueue(_TEST_APP_CONFIG)

-    with (
-        patch.object(AppConfig, "current", return_value=_make_config(enabled=True)),
-        patch.object(queue, "_reset_timer"),
-    ):
+    with patch.object(queue, "_reset_timer"):
         queue.add(thread_id="thread-1", messages=["first"], correction_detected=True)
         queue.add(thread_id="thread-1", messages=["second"], correction_detected=False)

@@ -66,10 +63,7 @@ def test_process_queue_forwards_correction_flag_to_updater() -> None:
 def test_queue_add_preserves_existing_reinforcement_flag_for_same_thread() -> None:
     queue = MemoryUpdateQueue(_TEST_APP_CONFIG)

-    with (
-        patch.object(AppConfig, "current", return_value=_make_config(enabled=True)),
-        patch.object(queue, "_reset_timer"),
-    ):
+    with patch.object(queue, "_reset_timer"):
         queue.add(thread_id="thread-1", messages=["first"], reinforcement_detected=True)
         queue.add(thread_id="thread-1", messages=["second"], reinforcement_detected=False)

@@ -25,7 +25,6 @@ def _enable_memory(monkeypatch):
     """Ensure MemoryUpdateQueue.add() doesn't early-return on disabled memory."""
     config = MagicMock(spec=AppConfig)
    config.memory = MemoryConfig(enabled=True)
-    monkeypatch.setattr(AppConfig, "current", staticmethod(lambda: config))


 def test_conversation_context_has_user_id():

@@ -71,10 +71,9 @@ class TestFileMemoryStorage:
             return mock_paths

         with patch("deerflow.agents.memory.storage.get_paths", side_effect=mock_get_paths):
-            with patch.object(AppConfig, "current", return_value=_app_config(storage_path="")):
-                storage = FileMemoryStorage(_TEST_MEMORY_CONFIG)
-                path = storage._get_memory_file_path(None)
-                assert path == tmp_path / "memory.json"
+            storage = FileMemoryStorage(_TEST_MEMORY_CONFIG)
+            path = storage._get_memory_file_path(None)
+            assert path == tmp_path / "memory.json"

     def test_get_memory_file_path_agent(self, tmp_path):
         """Should return per-agent memory file path when agent_name is provided."""
@@ -105,11 +104,10 @@ class TestFileMemoryStorage:
             return mock_paths

         with patch("deerflow.agents.memory.storage.get_paths", side_effect=mock_get_paths):
-            with patch.object(AppConfig, "current", return_value=_app_config(storage_path="")):
-                storage = FileMemoryStorage(_TEST_MEMORY_CONFIG)
-                memory = storage.load()
-                assert isinstance(memory, dict)
-                assert memory["version"] == "1.0"
+            storage = FileMemoryStorage(_TEST_MEMORY_CONFIG)
+            memory = storage.load()
+            assert isinstance(memory, dict)
+            assert memory["version"] == "1.0"

     def test_save_writes_to_file(self, tmp_path):
         """Should save memory data to file."""
@@ -121,12 +119,11 @@
             return mock_paths

         with patch("deerflow.agents.memory.storage.get_paths", side_effect=mock_get_paths):
-            with patch.object(AppConfig, "current", return_value=_app_config(storage_path="")):
-                storage = FileMemoryStorage(_TEST_MEMORY_CONFIG)
-                test_memory = {"version": "1.0", "facts": [{"content": "test fact"}]}
-                result = storage.save(test_memory)
-                assert result is True
-                assert memory_file.exists()
+            storage = FileMemoryStorage(_TEST_MEMORY_CONFIG)
+            test_memory = {"version": "1.0", "facts": [{"content": "test fact"}]}
+            result = storage.save(test_memory)
+            assert result is True
+            assert memory_file.exists()

     def test_reload_forces_cache_invalidation(self, tmp_path):
         """Should force reload from file and invalidate cache."""
@@ -140,18 +137,17 @@
             return mock_paths

         with patch("deerflow.agents.memory.storage.get_paths", side_effect=mock_get_paths):
-            with patch.object(AppConfig, "current", return_value=_app_config(storage_path="")):
-                storage = FileMemoryStorage(_TEST_MEMORY_CONFIG)
-                # First load
-                memory1 = storage.load()
-                assert memory1["facts"][0]["content"] == "initial fact"
+            storage = FileMemoryStorage(_TEST_MEMORY_CONFIG)
+            # First load
+            memory1 = storage.load()
+            assert memory1["facts"][0]["content"] == "initial fact"

-                # Update file directly
-                memory_file.write_text('{"version": "1.0", "facts": [{"content": "updated fact"}]}')
+            # Update file directly
+            memory_file.write_text('{"version": "1.0", "facts": [{"content": "updated fact"}]}')

-                # Reload should get updated data
-                memory2 = storage.reload()
-                assert memory2["facts"][0]["content"] == "updated fact"
+            # Reload should get updated data
+            memory2 = storage.reload()
+            assert memory2["facts"][0]["content"] == "updated fact"


 class TestGetMemoryStorage:

@@ -168,22 +164,19 @@

     def test_returns_file_memory_storage_by_default(self):
         """Should return FileMemoryStorage by default."""
-        with patch.object(AppConfig, "current", return_value=_app_config(storage_class="deerflow.agents.memory.storage.FileMemoryStorage")):
-            storage = get_memory_storage(_TEST_MEMORY_CONFIG)
-            assert isinstance(storage, FileMemoryStorage)
+        storage = get_memory_storage(_TEST_MEMORY_CONFIG)
+        assert isinstance(storage, FileMemoryStorage)

     def test_falls_back_to_file_memory_storage_on_error(self):
         """Should fall back to FileMemoryStorage if configured storage fails to load."""
-        with patch.object(AppConfig, "current", return_value=_app_config(storage_class="non.existent.StorageClass")):
-            storage = get_memory_storage(_TEST_MEMORY_CONFIG)
-            assert isinstance(storage, FileMemoryStorage)
+        storage = get_memory_storage(_TEST_MEMORY_CONFIG)
+        assert isinstance(storage, FileMemoryStorage)

     def test_returns_singleton_instance(self):
         """Should return the same instance on subsequent calls."""
-        with patch.object(AppConfig, "current", return_value=_app_config(storage_class="deerflow.agents.memory.storage.FileMemoryStorage")):
-            storage1 = get_memory_storage(_TEST_MEMORY_CONFIG)
-            storage2 = get_memory_storage(_TEST_MEMORY_CONFIG)
-            assert storage1 is storage2
+        storage1 = get_memory_storage(_TEST_MEMORY_CONFIG)
+        storage2 = get_memory_storage(_TEST_MEMORY_CONFIG)
+        assert storage1 is storage2

     def test_get_memory_storage_thread_safety(self):
         """Should safely initialize the singleton even with concurrent calls."""
@@ -195,12 +188,11 @@
             # that the singleton initialization remains thread-safe.
             results.append(get_memory_storage(_TEST_MEMORY_CONFIG))

-        with patch.object(AppConfig, "current", return_value=_app_config(storage_class="deerflow.agents.memory.storage.FileMemoryStorage")):
-            threads = [threading.Thread(target=get_storage) for _ in range(10)]
-            for t in threads:
-                t.start()
-            for t in threads:
-                t.join()
+        threads = [threading.Thread(target=get_storage) for _ in range(10)]
+        for t in threads:
+            t.start()
+        for t in threads:
+            t.join()

         # All results should be the exact same instance
         assert len(results) == 10
@@ -209,13 +201,11 @@
     def test_get_memory_storage_invalid_class_fallback(self):
         """Should fall back to FileMemoryStorage if the configured class is not actually a class."""
         # Using a built-in function instead of a class
-        with patch.object(AppConfig, "current", return_value=_app_config(storage_class="os.path.join")):
-            storage = get_memory_storage(_TEST_MEMORY_CONFIG)
-            assert isinstance(storage, FileMemoryStorage)
+        storage = get_memory_storage(_TEST_MEMORY_CONFIG)
+        assert isinstance(storage, FileMemoryStorage)

     def test_get_memory_storage_non_subclass_fallback(self):
         """Should fall back to FileMemoryStorage if the configured class is not a subclass of MemoryStorage."""
         # Using 'dict' as a class that is not a MemoryStorage subclass
-        with patch.object(AppConfig, "current", return_value=_app_config(storage_class="builtins.dict")):
-            storage = get_memory_storage(_TEST_MEMORY_CONFIG)
-            assert isinstance(storage, FileMemoryStorage)
+        storage = get_memory_storage(_TEST_MEMORY_CONFIG)
+        assert isinstance(storage, FileMemoryStorage)

@ -36,12 +36,6 @@ def storage() -> FileMemoryStorage:
|
||||
return FileMemoryStorage(_TEST_MEMORY_CONFIG)
|
||||
|
||||
|
||||
@pytest.fixture(autouse=True)
|
||||
def _mock_current_config():
|
||||
"""Ensure AppConfig.current() returns a minimal config for all tests."""
|
||||
cfg = _mock_app_config()
|
||||
with patch.object(AppConfig, "current", return_value=cfg):
|
||||
yield
|
||||
|
||||
|
||||
class TestUserIsolatedStorage:
|
||||
|
||||
@@ -518,7 +518,6 @@ class TestUpdateMemoryStructuredResponse:

         with (
             patch.object(updater, "_get_model", return_value=self._make_mock_model(valid_json)),
-            patch.object(AppConfig, "current", return_value=_memory_config(enabled=True)),
             patch("deerflow.agents.memory.updater.get_memory_data", return_value=_make_memory()),
             patch("deerflow.agents.memory.updater.get_memory_storage", return_value=MagicMock(save=MagicMock(return_value=True))),
         ):
@@ -541,7 +540,6 @@ class TestUpdateMemoryStructuredResponse:

         with (
             patch.object(updater, "_get_model", return_value=self._make_mock_model(list_content)),
-            patch.object(AppConfig, "current", return_value=_memory_config(enabled=True)),
             patch("deerflow.agents.memory.updater.get_memory_data", return_value=_make_memory()),
             patch("deerflow.agents.memory.updater.get_memory_storage", return_value=MagicMock(save=MagicMock(return_value=True))),
         ):
@@ -563,7 +561,6 @@ class TestUpdateMemoryStructuredResponse:

         with (
             patch.object(updater, "_get_model", return_value=model),
-            patch.object(AppConfig, "current", return_value=_memory_config(enabled=True)),
             patch("deerflow.agents.memory.updater.get_memory_data", return_value=_make_memory()),
             patch("deerflow.agents.memory.updater.get_memory_storage", return_value=MagicMock(save=MagicMock(return_value=True))),
         ):
@@ -588,7 +585,6 @@ class TestUpdateMemoryStructuredResponse:

         with (
             patch.object(updater, "_get_model", return_value=model),
-            patch.object(AppConfig, "current", return_value=_memory_config(enabled=True)),
             patch("deerflow.agents.memory.updater.get_memory_data", return_value=_make_memory()),
             patch("deerflow.agents.memory.updater.get_memory_storage", return_value=MagicMock(save=MagicMock(return_value=True))),
         ):
@@ -680,7 +676,6 @@ class TestReinforcementHint:

         with (
             patch.object(updater, "_get_model", return_value=model),
-            patch.object(AppConfig, "current", return_value=_memory_config(enabled=True)),
             patch("deerflow.agents.memory.updater.get_memory_data", return_value=_make_memory()),
             patch("deerflow.agents.memory.updater.get_memory_storage", return_value=MagicMock(save=MagicMock(return_value=True))),
         ):
@@ -705,7 +700,6 @@ class TestReinforcementHint:

         with (
             patch.object(updater, "_get_model", return_value=model),
-            patch.object(AppConfig, "current", return_value=_memory_config(enabled=True)),
             patch("deerflow.agents.memory.updater.get_memory_data", return_value=_make_memory()),
             patch("deerflow.agents.memory.updater.get_memory_storage", return_value=MagicMock(save=MagicMock(return_value=True))),
         ):
@@ -730,7 +724,6 @@ class TestReinforcementHint:

         with (
             patch.object(updater, "_get_model", return_value=model),
-            patch.object(AppConfig, "current", return_value=_memory_config(enabled=True)),
             patch("deerflow.agents.memory.updater.get_memory_data", return_value=_make_memory()),
             patch("deerflow.agents.memory.updater.get_memory_storage", return_value=MagicMock(save=MagicMock(return_value=True))),
         ):

@@ -72,8 +72,7 @@ class FakeChatModel(BaseChatModel):


 def _patch_factory(monkeypatch, app_config: AppConfig, model_class=FakeChatModel):
-    """Patch AppConfig.get, resolve_class, and tracing for isolated unit tests."""
-    monkeypatch.setattr(AppConfig, "current", staticmethod(lambda: app_config))
+    """Patch resolve_class and tracing for isolated unit tests."""
     monkeypatch.setattr(factory_module, "resolve_class", lambda path, base: model_class)
     monkeypatch.setattr(factory_module, "build_tracing_callbacks", lambda: [])

@@ -88,7 +87,7 @@ def test_uses_first_model_when_name_is_none(monkeypatch):
     _patch_factory(monkeypatch, cfg)

     FakeChatModel.captured_kwargs = {}
-    factory_module.create_chat_model(name=None, app_config=AppConfig.current())
+    factory_module.create_chat_model(name=None, app_config=cfg)

     # resolve_class is called — if we reach here without ValueError, the correct model was used
     assert FakeChatModel.captured_kwargs.get("model") == "alpha"
@@ -96,11 +95,10 @@

 def test_raises_when_model_not_found(monkeypatch):
     cfg = _make_app_config([_make_model("only-model")])
-    monkeypatch.setattr(AppConfig, "current", staticmethod(lambda: cfg))
     monkeypatch.setattr(factory_module, "build_tracing_callbacks", lambda: [])

     with pytest.raises(ValueError, match="ghost-model"):
-        factory_module.create_chat_model(name="ghost-model", app_config=AppConfig.current())
+        factory_module.create_chat_model(name="ghost-model", app_config=cfg)


 def test_appends_all_tracing_callbacks(monkeypatch):
@@ -109,7 +107,7 @@ def test_appends_all_tracing_callbacks(monkeypatch):
     monkeypatch.setattr(factory_module, "build_tracing_callbacks", lambda: ["smith-callback", "langfuse-callback"])

     FakeChatModel.captured_kwargs = {}
-    model = factory_module.create_chat_model(name="alpha", app_config=AppConfig.current())
+    model = factory_module.create_chat_model(name="alpha", app_config=cfg)

     assert model.callbacks == ["smith-callback", "langfuse-callback"]

@@ -127,7 +125,7 @@ def test_thinking_enabled_raises_when_not_supported_but_when_thinking_enabled_is
     _patch_factory(monkeypatch, cfg)

     with pytest.raises(ValueError, match="does not support thinking"):
-        factory_module.create_chat_model(name="no-think", thinking_enabled=True, app_config=AppConfig.current())
+        factory_module.create_chat_model(name="no-think", thinking_enabled=True, app_config=cfg)


 def test_thinking_enabled_raises_for_empty_when_thinking_enabled_explicitly_set(monkeypatch):
@@ -138,7 +136,7 @@
     _patch_factory(monkeypatch, cfg)

     with pytest.raises(ValueError, match="does not support thinking"):
-        factory_module.create_chat_model(name="no-think-empty", thinking_enabled=True, app_config=AppConfig.current())
+        factory_module.create_chat_model(name="no-think-empty", thinking_enabled=True, app_config=cfg)


 def test_thinking_enabled_merges_when_thinking_enabled_settings(monkeypatch):
@@ -147,7 +145,7 @@
     _patch_factory(monkeypatch, cfg)

     FakeChatModel.captured_kwargs = {}
-    factory_module.create_chat_model(name="thinker", thinking_enabled=True, app_config=AppConfig.current())
+    factory_module.create_chat_model(name="thinker", thinking_enabled=True, app_config=cfg)

     assert FakeChatModel.captured_kwargs.get("temperature") == 1.0
     assert FakeChatModel.captured_kwargs.get("max_tokens") == 16000
@@ -183,7 +181,7 @@ def test_thinking_disabled_openai_gateway_format(monkeypatch):

     monkeypatch.setattr(factory_module, "resolve_class", lambda path, base: CapturingModel)

-    factory_module.create_chat_model(name="openai-gw", thinking_enabled=False, app_config=AppConfig.current())
+    factory_module.create_chat_model(name="openai-gw", thinking_enabled=False, app_config=cfg)

     assert captured.get("extra_body") == {"thinking": {"type": "disabled"}}
     assert captured.get("reasoning_effort") == "minimal"
@@ -216,7 +214,7 @@ def test_thinking_disabled_langchain_anthropic_format(monkeypatch):

     monkeypatch.setattr(factory_module, "resolve_class", lambda path, base: CapturingModel)

-    factory_module.create_chat_model(name="anthropic-native", thinking_enabled=False, app_config=AppConfig.current())
+    factory_module.create_chat_model(name="anthropic-native", thinking_enabled=False, app_config=cfg)

     assert captured.get("thinking") == {"type": "disabled"}
     assert "extra_body" not in captured
@@ -238,7 +236,7 @@ def test_thinking_disabled_no_when_thinking_enabled_does_nothing(monkeypatch):

     monkeypatch.setattr(factory_module, "resolve_class", lambda path, base: CapturingModel)

-    factory_module.create_chat_model(name="plain", thinking_enabled=False, app_config=AppConfig.current())
+    factory_module.create_chat_model(name="plain", thinking_enabled=False, app_config=cfg)

     assert "extra_body" not in captured
     assert "thinking" not in captured
@@ -278,7 +276,7 @@ def test_when_thinking_disabled_takes_precedence_over_hardcoded_disable(monkeypa

     monkeypatch.setattr(factory_module, "resolve_class", lambda path, base: CapturingModel)

-    factory_module.create_chat_model(name="custom-disable", thinking_enabled=False, app_config=AppConfig.current())
+    factory_module.create_chat_model(name="custom-disable", thinking_enabled=False, app_config=cfg)

     assert captured.get("extra_body") == {"thinking": {"type": "disabled"}}
     # User overrode the hardcoded "minimal" with "low"
@@ -310,7 +308,7 @@ def test_when_thinking_disabled_not_used_when_thinking_enabled(monkeypatch):

     monkeypatch.setattr(factory_module, "resolve_class", lambda path, base: CapturingModel)

-    factory_module.create_chat_model(name="wtd-ignored", thinking_enabled=True, app_config=AppConfig.current())
+    factory_module.create_chat_model(name="wtd-ignored", thinking_enabled=True, app_config=cfg)

     # when_thinking_enabled should apply, NOT when_thinking_disabled
     assert captured.get("extra_body") == {"thinking": {"type": "enabled"}}
@@ -339,7 +337,7 @@ def test_when_thinking_disabled_without_when_thinking_enabled_still_applies(monk

     monkeypatch.setattr(factory_module, "resolve_class", lambda path, base: CapturingModel)

-    factory_module.create_chat_model(name="wtd-only", thinking_enabled=False, app_config=AppConfig.current())
+    factory_module.create_chat_model(name="wtd-only", thinking_enabled=False, app_config=cfg)

     # when_thinking_disabled is now gated independently of has_thinking_settings
     assert captured.get("reasoning_effort") == "low"
@@ -370,7 +368,7 @@ def test_when_thinking_disabled_excluded_from_model_dump(monkeypatch):

     monkeypatch.setattr(factory_module, "resolve_class", lambda path, base: CapturingModel)

-    factory_module.create_chat_model(name="no-leak-wtd", thinking_enabled=True, app_config=AppConfig.current())
+    factory_module.create_chat_model(name="no-leak-wtd", thinking_enabled=True, app_config=cfg)

     # when_thinking_disabled value must NOT appear as a raw key
     assert "when_thinking_disabled" not in captured
@@ -394,7 +392,7 @@ def test_reasoning_effort_cleared_when_not_supported(monkeypatch):

     monkeypatch.setattr(factory_module, "resolve_class", lambda path, base: CapturingModel)

-    factory_module.create_chat_model(name="no-effort", thinking_enabled=False, app_config=AppConfig.current())
+    factory_module.create_chat_model(name="no-effort", thinking_enabled=False, app_config=cfg)

     assert captured.get("reasoning_effort") is None

@@ -422,7 +420,7 @@ def test_reasoning_effort_preserved_when_supported(monkeypatch):

     monkeypatch.setattr(factory_module, "resolve_class", lambda path, base: CapturingModel)

-    factory_module.create_chat_model(name="effort-model", thinking_enabled=False, app_config=AppConfig.current())
+    factory_module.create_chat_model(name="effort-model", thinking_enabled=False, app_config=cfg)

     # When supports_reasoning_effort=True, it should NOT be cleared to None
     # The disable path sets it to "minimal"; supports_reasoning_effort=True keeps it
@ -458,7 +456,7 @@ def test_thinking_shortcut_enables_thinking_when_thinking_enabled(monkeypatch):
|
||||
|
||||
monkeypatch.setattr(factory_module, "resolve_class", lambda path, base: CapturingModel)
|
||||
|
||||
factory_module.create_chat_model(name="shortcut-model", thinking_enabled=True, app_config=AppConfig.current())
|
||||
factory_module.create_chat_model(name="shortcut-model", thinking_enabled=True, app_config=cfg)
|
||||
|
||||
assert captured.get("thinking") == thinking_settings
|
||||
|
||||
@ -488,7 +486,7 @@ def test_thinking_shortcut_disables_thinking_when_thinking_disabled(monkeypatch)
|
||||
|
||||
monkeypatch.setattr(factory_module, "resolve_class", lambda path, base: CapturingModel)
|
||||
|
||||
factory_module.create_chat_model(name="shortcut-disable", thinking_enabled=False, app_config=AppConfig.current())
|
||||
factory_module.create_chat_model(name="shortcut-disable", thinking_enabled=False, app_config=cfg)
|
||||
|
||||
assert captured.get("thinking") == {"type": "disabled"}
|
||||
assert "extra_body" not in captured
|
||||
@@ -520,7 +518,7 @@ def test_thinking_shortcut_merges_with_when_thinking_enabled(monkeypatch):
 
     monkeypatch.setattr(factory_module, "resolve_class", lambda path, base: CapturingModel)
 
-    factory_module.create_chat_model(name="merge-model", thinking_enabled=True, app_config=AppConfig.current())
+    factory_module.create_chat_model(name="merge-model", thinking_enabled=True, app_config=cfg)
 
     # Both the thinking shortcut and when_thinking_enabled settings should be applied
     assert captured.get("thinking") == thinking_settings
@@ -552,7 +550,7 @@ def test_thinking_shortcut_not_leaked_into_model_when_disabled(monkeypatch):
 
     monkeypatch.setattr(factory_module, "resolve_class", lambda path, base: CapturingModel)
 
-    factory_module.create_chat_model(name="no-leak", thinking_enabled=False, app_config=AppConfig.current())
+    factory_module.create_chat_model(name="no-leak", thinking_enabled=False, app_config=cfg)
 
     # The disable path should have set thinking to disabled (not the raw enabled shortcut)
     assert captured.get("thinking") == {"type": "disabled"}
@@ -590,7 +588,7 @@ def test_openai_compatible_provider_passes_base_url(monkeypatch):
 
     monkeypatch.setattr(factory_module, "resolve_class", lambda path, base: CapturingModel)
 
-    factory_module.create_chat_model(name="minimax-m2.5", app_config=AppConfig.current())
+    factory_module.create_chat_model(name="minimax-m2.5", app_config=cfg)
 
    assert captured.get("model") == "MiniMax-M2.5"
     assert captured.get("base_url") == "https://api.minimax.io/v1"
@@ -638,11 +636,11 @@ def test_openai_compatible_provider_multiple_models(monkeypatch):
     monkeypatch.setattr(factory_module, "resolve_class", lambda path, base: CapturingModel)
 
     # Create first model
-    factory_module.create_chat_model(name="minimax-m2.5", app_config=AppConfig.current())
+    factory_module.create_chat_model(name="minimax-m2.5", app_config=cfg)
     assert captured.get("model") == "MiniMax-M2.5"
 
     # Create second model
-    factory_module.create_chat_model(name="minimax-m2.5-highspeed", app_config=AppConfig.current())
+    factory_module.create_chat_model(name="minimax-m2.5-highspeed", app_config=cfg)
     assert captured.get("model") == "MiniMax-M2.5-highspeed"
 
 
@@ -670,7 +668,7 @@ def test_codex_provider_disables_reasoning_when_thinking_disabled(monkeypatch):
     monkeypatch.setattr(codex_provider_module, "CodexChatModel", FakeCodexChatModel)
 
     FakeChatModel.captured_kwargs = {}
-    factory_module.create_chat_model(name="codex", thinking_enabled=False, app_config=AppConfig.current())
+    factory_module.create_chat_model(name="codex", thinking_enabled=False, app_config=cfg)
 
     assert FakeChatModel.captured_kwargs.get("reasoning_effort") == "none"
 
@@ -690,7 +688,7 @@ def test_codex_provider_preserves_explicit_reasoning_effort(monkeypatch):
     monkeypatch.setattr(codex_provider_module, "CodexChatModel", FakeCodexChatModel)
 
     FakeChatModel.captured_kwargs = {}
-    factory_module.create_chat_model(name="codex", thinking_enabled=True, reasoning_effort="high", app_config=AppConfig.current())
+    factory_module.create_chat_model(name="codex", thinking_enabled=True, reasoning_effort="high", app_config=cfg)
 
     assert FakeChatModel.captured_kwargs.get("reasoning_effort") == "high"
 
@@ -710,7 +708,7 @@ def test_codex_provider_defaults_reasoning_effort_to_medium(monkeypatch):
     monkeypatch.setattr(codex_provider_module, "CodexChatModel", FakeCodexChatModel)
 
     FakeChatModel.captured_kwargs = {}
-    factory_module.create_chat_model(name="codex", thinking_enabled=True, app_config=AppConfig.current())
+    factory_module.create_chat_model(name="codex", thinking_enabled=True, app_config=cfg)
 
     assert FakeChatModel.captured_kwargs.get("reasoning_effort") == "medium"
 
@@ -731,7 +729,7 @@ def test_codex_provider_strips_unsupported_max_tokens(monkeypatch):
     monkeypatch.setattr(codex_provider_module, "CodexChatModel", FakeCodexChatModel)
 
     FakeChatModel.captured_kwargs = {}
-    factory_module.create_chat_model(name="codex", thinking_enabled=True, app_config=AppConfig.current())
+    factory_module.create_chat_model(name="codex", thinking_enabled=True, app_config=cfg)
 
     assert "max_tokens" not in FakeChatModel.captured_kwargs
 
@@ -757,7 +755,7 @@ def test_thinking_disabled_vllm_chat_template_format(monkeypatch):
 
     monkeypatch.setattr(factory_module, "resolve_class", lambda path, base: CapturingModel)
 
-    factory_module.create_chat_model(name="vllm-qwen", thinking_enabled=False, app_config=AppConfig.current())
+    factory_module.create_chat_model(name="vllm-qwen", thinking_enabled=False, app_config=cfg)
 
     assert captured.get("extra_body") == {"top_k": 20, "chat_template_kwargs": {"thinking": False}}
     assert captured.get("reasoning_effort") is None
@@ -784,7 +782,7 @@ def test_thinking_disabled_vllm_enable_thinking_format(monkeypatch):
 
     monkeypatch.setattr(factory_module, "resolve_class", lambda path, base: CapturingModel)
 
-    factory_module.create_chat_model(name="vllm-qwen-enable", thinking_enabled=False, app_config=AppConfig.current())
+    factory_module.create_chat_model(name="vllm-qwen-enable", thinking_enabled=False, app_config=cfg)
 
     assert captured.get("extra_body") == {
         "top_k": 20,
@@ -818,7 +816,7 @@ def test_stream_usage_injected_for_openai_compatible_model(monkeypatch):
 
     monkeypatch.setattr(factory_module, "resolve_class", lambda path, base: CapturingModel)
 
-    factory_module.create_chat_model(name="deepseek", app_config=AppConfig.current())
+    factory_module.create_chat_model(name="deepseek", app_config=cfg)
 
     assert captured.get("stream_usage") is True
 
@@ -837,7 +835,7 @@ def test_stream_usage_not_injected_for_non_openai_model(monkeypatch):
 
     monkeypatch.setattr(factory_module, "resolve_class", lambda path, base: CapturingModel)
 
-    factory_module.create_chat_model(name="claude", app_config=AppConfig.current())
+    factory_module.create_chat_model(name="claude", app_config=cfg)
 
     assert "stream_usage" not in captured
 
@@ -867,7 +865,7 @@ def test_stream_usage_not_overridden_when_explicitly_set_in_config(monkeypatch):
 
     monkeypatch.setattr(factory_module, "resolve_class", lambda path, base: CapturingModel)
 
-    factory_module.create_chat_model(name="deepseek", app_config=AppConfig.current())
+    factory_module.create_chat_model(name="deepseek", app_config=cfg)
 
     assert captured.get("stream_usage") is False
 
@@ -897,7 +895,7 @@ def test_openai_responses_api_settings_are_passed_to_chatopenai(monkeypatch):
 
     monkeypatch.setattr(factory_module, "resolve_class", lambda path, base: CapturingModel)
 
-    factory_module.create_chat_model(name="gpt-5-responses", app_config=AppConfig.current())
+    factory_module.create_chat_model(name="gpt-5-responses", app_config=cfg)
 
     assert captured.get("use_responses_api") is True
     assert captured.get("output_version") == "responses/v1"
@@ -938,7 +936,7 @@ def test_no_duplicate_kwarg_when_reasoning_effort_in_config_and_thinking_disable
     _patch_factory(monkeypatch, cfg, model_class=CapturingModel)
 
     # Must not raise TypeError
-    factory_module.create_chat_model(name="doubao-model", thinking_enabled=False, app_config=AppConfig.current())
+    factory_module.create_chat_model(name="doubao-model", thinking_enabled=False, app_config=cfg)
 
     # kwargs (runtime) takes precedence: thinking-disabled path sets reasoning_effort=minimal
     assert captured.get("reasoning_effort") == "minimal"
 
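The test-side migration these hunks land follows one mechanical pattern: build the config in a local variable and hand it to the factory explicitly, instead of routing it through a process-global accessor. A minimal stand-alone sketch of that pattern, where `FakeAppConfig` and this `create_chat_model` are illustrative stand-ins rather than the real deerflow API:

```python
from dataclasses import dataclass


@dataclass
class FakeAppConfig:
    # Illustrative stand-in for the AppConfig value object.
    default_model: str


def create_chat_model(name: str, app_config: FakeAppConfig) -> dict:
    # The factory never consults a process-global; everything it needs
    # arrives through the explicit app_config parameter.
    return {"name": name, "default_model": app_config.default_model}


# Before: the factory read AppConfig.current() internally, so tests had to
# monkey-patch the classmethod. After: the test holds cfg in a local and
# passes it explicitly, so no patching is needed.
cfg = FakeAppConfig(default_model="gpt-4o-mini")
model = create_chat_model(name="m", app_config=cfg)
assert model["default_model"] == "gpt-4o-mini"
```

Two tests using different configs can now run in the same process without any shared-state cleanup, which is the isolation property the commit's multi-client test pins down.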
@@ -2,7 +2,6 @@ from types import SimpleNamespace
 from unittest.mock import patch
 
 from deerflow.community.aio_sandbox.aio_sandbox import AioSandbox
-from deerflow.config.app_config import AppConfig
 from deerflow.sandbox.local.local_sandbox import LocalSandbox
 from deerflow.sandbox.search import GrepMatch, find_glob_matches, find_grep_matches
 from deerflow.sandbox.tools import glob_tool, grep_tool
@@ -111,8 +110,6 @@ def test_grep_tool_truncates_results(tmp_path, monkeypatch) -> None:
     (workspace / "main.py").write_text("TODO one\nTODO two\nTODO three\n", encoding="utf-8")
 
     monkeypatch.setattr("deerflow.sandbox.tools.ensure_sandbox_initialized", lambda runtime: LocalSandbox(id="local"))
-    # Prevent config.yaml tool config from overriding the caller-supplied max_results=2.
-    monkeypatch.setattr(AppConfig, "current", staticmethod(lambda: SimpleNamespace(get_tool_config=lambda name: None)))
 
     result = grep_tool.func(
         runtime=runtime,
@@ -332,10 +329,6 @@ def test_glob_tool_honors_smaller_requested_max_results(tmp_path, monkeypatch) -
     (workspace / "c.py").write_text("print('c')\n", encoding="utf-8")
 
     monkeypatch.setattr("deerflow.sandbox.tools.ensure_sandbox_initialized", lambda runtime: LocalSandbox(id="local"))
-    monkeypatch.setattr(
-        AppConfig, "current",
-        staticmethod(lambda: SimpleNamespace(get_tool_config=lambda name: SimpleNamespace(model_extra={"max_results": 50}))),
-    )
 
     result = glob_tool.func(
         runtime=runtime,
@@ -2,14 +2,12 @@ from types import SimpleNamespace
 
 import pytest
 
-from deerflow.config.app_config import AppConfig
 from deerflow.skills.security_scanner import scan_skill_content
 
 
 @pytest.mark.anyio
 async def test_scan_skill_content_blocks_when_model_unavailable(monkeypatch):
     config = SimpleNamespace(skill_evolution=SimpleNamespace(moderation_model_name=None))
-    monkeypatch.setattr(AppConfig, "current", staticmethod(lambda: config))
     monkeypatch.setattr("deerflow.skills.security_scanner.create_chat_model", lambda **kwargs: (_ for _ in ()).throw(RuntimeError("boom")))
 
     result = await scan_skill_content(config, "---\nname: demo-skill\ndescription: demo\n---\n", executable=False)
@@ -34,7 +34,6 @@ def test_skill_manage_create_and_patch(monkeypatch, tmp_path):
         skills=SimpleNamespace(get_skills_path=lambda: skills_root, container_path="/mnt/skills"),
         skill_evolution=SimpleNamespace(enabled=True, moderation_model_name=None),
     )
-    monkeypatch.setattr(AppConfig, "current", staticmethod(lambda: config))
     refresh_calls = []
 
     async def _refresh(*a, **k):
@@ -76,7 +75,6 @@ def test_skill_manage_patch_replaces_single_occurrence_by_default(monkeypatch, t
         skills=SimpleNamespace(get_skills_path=lambda: skills_root, container_path="/mnt/skills"),
         skill_evolution=SimpleNamespace(enabled=True, moderation_model_name=None),
     )
-    monkeypatch.setattr(AppConfig, "current", staticmethod(lambda: config))
 
     async def _refresh(*a, **k):
         return None
@@ -114,7 +112,6 @@ def test_skill_manage_rejects_public_skill_patch(monkeypatch, tmp_path):
         skills=SimpleNamespace(get_skills_path=lambda: skills_root, container_path="/mnt/skills"),
         skill_evolution=SimpleNamespace(enabled=True, moderation_model_name=None),
     )
-    monkeypatch.setattr(AppConfig, "current", staticmethod(lambda: config))
 
     runtime = SimpleNamespace(context=_make_context("", config), config={"configurable": {}})
 
@@ -137,7 +134,6 @@ def test_skill_manage_sync_wrapper_supported(monkeypatch, tmp_path):
         skills=SimpleNamespace(get_skills_path=lambda: skills_root, container_path="/mnt/skills"),
         skill_evolution=SimpleNamespace(enabled=True, moderation_model_name=None),
     )
-    monkeypatch.setattr(AppConfig, "current", staticmethod(lambda: config))
     refresh_calls = []
 
     async def _refresh(*a, **k):
@@ -164,7 +160,6 @@ def test_skill_manage_rejects_support_path_traversal(monkeypatch, tmp_path):
         skills=SimpleNamespace(get_skills_path=lambda: skills_root, container_path="/mnt/skills"),
         skill_evolution=SimpleNamespace(enabled=True, moderation_model_name=None),
     )
-    monkeypatch.setattr(AppConfig, "current", staticmethod(lambda: config))
 
     async def _refresh(*a, **k):
         return None
@@ -46,7 +46,6 @@ def test_custom_skills_router_lifecycle(monkeypatch, tmp_path):
         skills=SimpleNamespace(get_skills_path=lambda: skills_root, container_path="/mnt/skills"),
         skill_evolution=SimpleNamespace(enabled=True, moderation_model_name=None),
     )
-    monkeypatch.setattr(AppConfig, "current", staticmethod(lambda: config))
     monkeypatch.setattr("app.gateway.routers.skills.scan_skill_content", lambda *args, **kwargs: _async_scan("allow", "ok"))
     refresh_calls = []
 
@@ -96,7 +95,6 @@ def test_custom_skill_rollback_blocked_by_scanner(monkeypatch, tmp_path):
         skills=SimpleNamespace(get_skills_path=lambda: skills_root, container_path="/mnt/skills"),
         skill_evolution=SimpleNamespace(enabled=True, moderation_model_name=None),
     )
-    monkeypatch.setattr(AppConfig, "current", staticmethod(lambda: config))
     get_skill_history_file("demo-skill", config).write_text(
         '{"action":"human_edit","prev_content":' + json.dumps(original_content) + ',"new_content":' + json.dumps(edited_content) + "}\n",
         encoding="utf-8",
@@ -138,7 +136,6 @@ def test_custom_skill_delete_preserves_history_and_allows_restore(monkeypatch, t
         skills=SimpleNamespace(get_skills_path=lambda: skills_root, container_path="/mnt/skills"),
         skill_evolution=SimpleNamespace(enabled=True, moderation_model_name=None),
     )
-    monkeypatch.setattr(AppConfig, "current", staticmethod(lambda: config))
     monkeypatch.setattr("app.gateway.routers.skills.scan_skill_content", lambda *args, **kwargs: _async_scan("allow", "ok"))
     refresh_calls = []
 
@@ -185,7 +182,6 @@ def test_update_skill_refreshes_prompt_cache_before_return(monkeypatch, tmp_path
     _app_cfg = AppConfig(sandbox=SandboxConfig(use="test"), extensions=ExtensionsConfig(mcp_servers={}, skills={}))
 
     monkeypatch.setattr("app.gateway.routers.skills.load_skills", _load_skills)
-    monkeypatch.setattr(AppConfig, "current", staticmethod(lambda: _app_cfg))
     monkeypatch.setattr(AppConfig, "from_file", staticmethod(lambda: _app_cfg))
     monkeypatch.setattr(skills_router.ExtensionsConfig, "resolve_config_path", staticmethod(lambda: config_path))
     monkeypatch.setattr("app.gateway.routers.skills.refresh_skills_system_prompt_cache_async", _refresh)
@@ -3,14 +3,12 @@
 Covers:
 - SubagentsAppConfig / SubagentOverrideConfig model validation and defaults
 - get_timeout_for() / get_max_turns_for() resolution logic
-- AppConfig.subagents field access via AppConfig.current()
+- AppConfig.subagents field access
 - registry.get_subagent_config() applies config overrides
 - registry.list_subagents() applies overrides for all agents
 - Polling timeout calculation in task_tool is consistent with config
 """
 
-from unittest.mock import patch
-
 import pytest
 
 from deerflow.config.app_config import AppConfig
@@ -133,17 +131,16 @@ class TestMaxTurnsResolution:
 
 
 # ---------------------------------------------------------------------------
-# AppConfig.subagents via AppConfig.current()
+# AppConfig.subagents
 # ---------------------------------------------------------------------------
 
 
 class TestAppConfigSubagents:
     def test_load_global_timeout(self):
         cfg = _make_config(timeout_seconds=300, max_turns=120)
-        with patch.object(AppConfig, "current", return_value=cfg):
-            sub = AppConfig.current().subagents
-            assert sub.timeout_seconds == 300
-            assert sub.max_turns == 120
+        sub = cfg.subagents
+        assert sub.timeout_seconds == 300
+        assert sub.max_turns == 120
 
     def test_load_with_per_agent_overrides(self):
         cfg = _make_config(
@@ -154,29 +151,26 @@ class TestAppConfigSubagents:
                 "bash": {"timeout_seconds": 60, "max_turns": 80},
             },
         )
-        with patch.object(AppConfig, "current", return_value=cfg):
-            sub = AppConfig.current().subagents
-            assert sub.get_timeout_for("general-purpose") == 1800
-            assert sub.get_timeout_for("bash") == 60
-            assert sub.get_max_turns_for("general-purpose", 100) == 200
-            assert sub.get_max_turns_for("bash", 60) == 80
+        sub = cfg.subagents
+        assert sub.get_timeout_for("general-purpose") == 1800
+        assert sub.get_timeout_for("bash") == 60
+        assert sub.get_max_turns_for("general-purpose", 100) == 200
+        assert sub.get_max_turns_for("bash", 60) == 80
 
     def test_load_partial_override(self):
         cfg = _make_config(
             timeout_seconds=600,
             agents={"bash": {"timeout_seconds": 120, "max_turns": 70}},
         )
-        with patch.object(AppConfig, "current", return_value=cfg):
-            sub = AppConfig.current().subagents
-            assert sub.get_timeout_for("general-purpose") == 600
-            assert sub.get_timeout_for("bash") == 120
-            assert sub.get_max_turns_for("general-purpose", 100) == 100
-            assert sub.get_max_turns_for("bash", 60) == 70
+        sub = cfg.subagents
+        assert sub.get_timeout_for("general-purpose") == 600
+        assert sub.get_timeout_for("bash") == 120
+        assert sub.get_max_turns_for("general-purpose", 100) == 100
+        assert sub.get_max_turns_for("bash", 60) == 70
 
     def test_load_empty_uses_defaults(self):
         cfg = _make_config()
-        with patch.object(AppConfig, "current", return_value=cfg):
-            sub = AppConfig.current().subagents
-            assert sub.timeout_seconds == 900
-            assert sub.max_turns is None
-            assert sub.agents == {}
+        sub = cfg.subagents
+        assert sub.timeout_seconds == 900
+        assert sub.max_turns is None
+        assert sub.agents == {}
@@ -7,7 +7,6 @@ from unittest.mock import MagicMock, patch
 from langchain_core.messages import AIMessage, HumanMessage, ToolMessage
 
 from deerflow.client import DeerFlowClient
-from deerflow.config.app_config import AppConfig
 
 # ---------------------------------------------------------------------------
 # _serialize_message — usage_metadata passthrough
@@ -155,8 +154,7 @@ class TestStreamUsageIntegration:
     """Test that stream() emits usage_metadata in messages-tuple and end events."""
 
     def _make_client(self):
-        with patch.object(AppConfig, "current", return_value=_mock_app_config()):
-            return DeerFlowClient()
+        return DeerFlowClient()
 
     def test_stream_emits_usage_in_messages_tuple(self):
         """messages-tuple AI event should include usage_metadata when present."""
@@ -6,7 +6,6 @@ import sys
 import pytest
 from langchain_core.tools import tool as langchain_tool
 
-from deerflow.config.app_config import AppConfig
 from deerflow.config.tool_search_config import ToolSearchConfig
 from deerflow.tools.builtins.tool_search import (
     DeferredToolRegistry,
@@ -255,42 +254,42 @@ class TestToolSearchTool:
 
 
 class TestDeferredToolsPromptSection:
-    @pytest.fixture(autouse=True)
-    def _mock_app_config(self, monkeypatch):
+    @pytest.fixture
+    def mock_config(self):
         """Provide a minimal AppConfig mock so tests don't need config.yaml."""
         from unittest.mock import MagicMock
 
         from deerflow.config.tool_search_config import ToolSearchConfig
 
-        mock_config = MagicMock()
-        mock_config.tool_search = ToolSearchConfig()  # disabled by default
-        monkeypatch.setattr(AppConfig, "current", staticmethod(lambda: mock_config))
+        config = MagicMock()
+        config.tool_search = ToolSearchConfig()  # disabled by default
+        return config
 
-    def test_empty_when_disabled(self):
+    def test_empty_when_disabled(self, mock_config):
         from deerflow.agents.lead_agent.prompt import get_deferred_tools_prompt_section
 
         # tool_search.enabled defaults to False
-        section = get_deferred_tools_prompt_section(AppConfig.current())
+        section = get_deferred_tools_prompt_section(mock_config)
         assert section == ""
 
-    def test_empty_when_enabled_but_no_registry(self, monkeypatch):
+    def test_empty_when_enabled_but_no_registry(self, mock_config):
         from deerflow.agents.lead_agent.prompt import get_deferred_tools_prompt_section
-        AppConfig.current().tool_search = ToolSearchConfig(enabled=True)
-        section = get_deferred_tools_prompt_section(AppConfig.current())
+        mock_config.tool_search = ToolSearchConfig(enabled=True)
+        section = get_deferred_tools_prompt_section(mock_config)
         assert section == ""
 
-    def test_empty_when_enabled_but_empty_registry(self, monkeypatch):
+    def test_empty_when_enabled_but_empty_registry(self, mock_config):
         from deerflow.agents.lead_agent.prompt import get_deferred_tools_prompt_section
-        AppConfig.current().tool_search = ToolSearchConfig(enabled=True)
+        mock_config.tool_search = ToolSearchConfig(enabled=True)
         set_deferred_registry(DeferredToolRegistry())
-        section = get_deferred_tools_prompt_section(AppConfig.current())
+        section = get_deferred_tools_prompt_section(mock_config)
         assert section == ""
 
-    def test_lists_tool_names(self, registry, monkeypatch):
+    def test_lists_tool_names(self, registry, mock_config):
         from deerflow.agents.lead_agent.prompt import get_deferred_tools_prompt_section
-        AppConfig.current().tool_search = ToolSearchConfig(enabled=True)
+        mock_config.tool_search = ToolSearchConfig(enabled=True)
         set_deferred_registry(registry)
-        section = get_deferred_tools_prompt_section(AppConfig.current())
+        section = get_deferred_tools_prompt_section(mock_config)
         assert "<available-deferred-tools>" in section
         assert "</available-deferred-tools>" in section
         assert "github_create_issue" in section
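The fixture change above swaps an autouse monkey-patch of a global accessor for plain dependency injection: the fixture just returns the mock, and each test receives it as an argument. A minimal stand-alone sketch of that shape, with hypothetical `FakeConfig` and `render_section` names and the `@pytest.fixture` decorator elided so it runs as a plain script:

```python
class FakeConfig:
    # Illustrative stand-in for the mocked AppConfig.
    def __init__(self):
        self.tool_search_enabled = False


def mock_config():
    # In pytest this would carry @pytest.fixture; no autouse, no patching,
    # so tests that don't ask for it are completely unaffected.
    return FakeConfig()


def render_section(config: FakeConfig) -> str:
    # The function under test takes the config explicitly, so the test
    # controls behavior by mutating its own local object.
    return "<available-deferred-tools/>" if config.tool_search_enabled else ""


cfg = mock_config()
assert render_section(cfg) == ""          # disabled by default
cfg.tool_search_enabled = True
assert render_section(cfg) == "<available-deferred-tools/>"
```

Because nothing is patched, there is no teardown to get wrong and no ordering dependency between tests.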
@@ -239,11 +239,9 @@ Commit: `a934a822`.
 
 # Phase 2: Pure explicit parameter passing
 
-> **Status:** Partially shipped. Tasks P2-1 through P2-5 merged (explicit-config primitives added alongside `AppConfig.current()` fallbacks). P2-6 through P2-10 remain open — they remove the fallbacks and finish the migration.
+> **Status:** Shipped. P2-1..P2-5 landed first with `AppConfig.current()` kept as a transition fallback; P2-6..P2-10 landed together in commit `84dccef2` to eliminate the fallback and delete the ambient-lookup surface entirely. `AppConfig` is now a pure Pydantic value object with no process-global state and no classmethod accessors.
 >
 > **Design:** [§8 of the design doc](./2026-04-12-config-refactor-design.md#8-phase-2-pure-explicit-parameter-passing)
 >
 > **Goal:** Delete `AppConfig._global`, `_override`, `init`, `current`, `set_override`, `reset_override`. `AppConfig` becomes a pure Pydantic value object. Every consumer receives config as an explicit parameter.
 
 ## Shipped commits
 
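The "typed context or fail loudly" contract that this phase lands in `resolve_context()` can be sketched in isolation; the class shapes below are illustrative, not the real deerflow types:

```python
from dataclasses import dataclass
from typing import Any


@dataclass
class DeerFlowContext:
    # Illustrative: the real context carries the captured AppConfig.
    app_config: dict


@dataclass
class Runtime:
    context: Any


def resolve_context(runtime: Runtime) -> DeerFlowContext:
    # A typed context attached at the composition root is returned as-is;
    # any other shape (dict, None) is a wiring bug, not a fallback case.
    if isinstance(runtime.context, DeerFlowContext):
        return runtime.context
    raise RuntimeError(
        "runtime.context must be a DeerFlowContext; attach it at the "
        "composition root instead of relying on ambient config lookups"
    )


ctx = resolve_context(Runtime(context=DeerFlowContext(app_config={"k": "v"})))
assert ctx.app_config == {"k": "v"}
```

Raising instead of falling back is what makes missing wiring surface at the first call site rather than silently reading stale global state.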
@@ -254,39 +252,41 @@ Commit: `a934a822`.
 | `f8738d1e` | P2-3 | H (Client) | `DeerFlowClient.__init__(config=...)` captures config locally; multi-client isolation test pins invariant |
 | `23b424e7` | P2-4 | B (Agent construction) | `make_lead_agent`, `_build_middlewares`, `_resolve_model_name`, `build_lead_runtime_middlewares` accept optional `app_config` |
 | `74b7a7ef` | P2-5 (partial) | D (Runtime) | `RunContext` gains `app_config` field; Worker builds `DeerFlowContext` from it; Gateway `deps.get_run_context` populates it. Standalone providers (checkpointer/store/stream_bridge) already accept optional config from Phase 1 |
+| `84dccef2` | P2-6..P2-10 | C+E+F+I + deletion | Memory closure-captures `MemoryConfig`; sandbox/skills/community/factories/tools thread `app_config` end-to-end; `resolve_context()` rejects non-typed runtime.context; `AppConfig.current()` removed; `get_sandbox_provider(app_config)` required; `make_lead_agent` LangGraph-Server bootstrap path loads via `AppConfig.from_file()`. All 2337 non-e2e tests pass. |
 
-All five commits preserve backward compatibility by keeping `AppConfig.current()` as a fallback. No caller is broken.
+## Completed tasks (P2-6 through P2-10)
 
-## Deferred tasks (P2-6 through P2-10)
+All landed in `84dccef2`.
 
-Each remaining task deletes a `AppConfig.current()` call path after migrating the consumers that still use it. They can land incrementally.
-
-### P2-6: Memory subsystem closure-captured config (Category C)
-- Files: `deerflow/agents/memory/{queue,updater,storage}.py` (8 calls)
-- Pattern: `MemoryQueue.__init__(config: MemoryConfig)`, captured in Timer closure so the callback thread has config without consulting any global.
-- Risk: medium — Timer thread runs outside ContextVar scope; closure capture at enqueue time is the only safe path.
+### P2-6: Memory subsystem closure-captured config (Category C) — shipped
+- [x] `MemoryConfig` captured at enqueue time so the Timer thread survives the ContextVar boundary.
+- [x] `deerflow/agents/memory/{queue,updater,storage}.py` no longer read any process-global.
 
-### P2-7: Sandbox / skills / factories / tools / community (Categories E+F)
-- Files: ~20 modules, ~35 call sites
-- Pattern: function signature gains `config: AppConfig`; caller threads it down.
-- Risk: low — mechanical. Each file is self-contained.
+### P2-7: Sandbox / skills / factories / tools / community (Categories E+F) — shipped
+- [x] `sandbox/tools.py` helpers take `app_config` explicitly; the `_cached` attribute trick is gone.
+- [x] `sandbox/security.py`, `sandbox/sandbox_provider.py`, `sandbox/local/local_sandbox_provider.py`, `community/aio_sandbox/aio_sandbox_provider.py` all require `app_config`.
+- [x] `skills/manager.py` + `skills/loader.py` + `agents/lead_agent/prompt.py` cache refresh thread `app_config` through the worker thread via closure.
+- [x] Community tools (tavily, jina, firecrawl, exa, ddg, image_search, infoquest, aio_sandbox) read `resolve_context(runtime).app_config`.
+- [x] `subagents/registry.py` (`get_subagent_config`, `list_subagents`, `get_available_subagent_names`) take `app_config`.
+- [x] `models/factory.py::create_chat_model` and `tools/tools.py::get_available_tools` require `app_config`.
 
-### P2-8: Test fixtures (Category I)
-- Files: ~18 test files, ~91 `patch.object(AppConfig, "current")` or `_global` mutations
-- Pattern: replace mock with `test_config` fixture returning an `AppConfig`; pass explicitly to function under test.
-- Risk: medium — largest diff. Can land file-by-file.
+### P2-8: Test fixtures (Category I) — shipped
+- [x] `conftest.py` autouse fixture no longer monkey-patches `AppConfig.current`; it only stubs `from_file()` so tests don't need a real `config.yaml`.
+- [x] ~90 call sites migrated: `patch.object(AppConfig, "current", ...)` removed where production no longer calls it (≈56 sites), and for the remaining ~10 files whose tests called `AppConfig.current()` themselves, the tests now hold the config in a local variable and pass it explicitly.
+- [x] `test_deer_flow_context.py` updated to assert that `resolve_context()` raises on dict/None contexts.
+- [x] `grep -rn 'AppConfig\.current' backend/tests` is clean.
 
-### P2-9: Simplify `resolve_context()`
-- File: `deerflow/config/deer_flow_context.py`
-- Pattern: remove `AppConfig.current()` fallback; raise on non-`DeerFlowContext` runtime.context.
-- Risk: low — one function. Depends on P2-6/7/8 to not leave dict-context callers.
+### P2-9: Simplify `resolve_context()` — shipped
+- [x] `resolve_context(runtime)` returns `runtime.context` when it is a `DeerFlowContext`; any other shape raises `RuntimeError` pointing at the composition root that should have attached the typed context.
+- [x] The dict-context and `get_config().configurable` fallbacks are deleted.
 
-### P2-10: Delete `AppConfig` lifecycle
-- Files: `config/app_config.py`, `tests/conftest.py`, `tests/test_app_config_reload.py`, `backend/CLAUDE.md`
-- Pattern: grep confirms zero callers of `current()`, `init()`, `set_override()`, `reset_override()`; delete them plus `_global` and `_override`.
-- Risk: high if P2-6/7/8 incomplete — grep gate must be clean.
-
-The detailed task-level step descriptions below were written before Phase 2 was split into shippable chunks. They remain accurate for the work that lies ahead.
+### P2-10: Delete `AppConfig` lifecycle — shipped
+- [x] `AppConfig.current()` classmethod removed.
+- [x] `_global` / `_override` / `init` / `set_override` / `reset_override` already gone as of Phase 1; nothing left to delete on the ambient side.
+- [x] LangGraph Server bootstrap uses `AppConfig.from_file()` inside `make_lead_agent` — a pure load, not an ambient lookup.
+- [x] `backend/CLAUDE.md` Config Lifecycle section rewritten to describe the explicit-parameter design.
+- [x] `app/gateway/deps.py` docstrings no longer mention `AppConfig.current()`.
+- [x] Production grep confirms zero `AppConfig.current()` call sites in `backend/packages` or `backend/app`.
 
 ## Rationale
 
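The closure-capture pattern behind P2-6 (a Timer callback that must run on its own thread, outside any ContextVar scope, without consulting a global) can be sketched with plain `threading`; the names here are illustrative, not the real deerflow memory API:

```python
import threading


class MemoryConfig:
    # Illustrative stand-in for the captured config value object.
    def __init__(self, flush_interval: float):
        self.flush_interval = flush_interval


results = []
done = threading.Event()


def enqueue_flush(config: MemoryConfig) -> threading.Timer:
    def _flush() -> None:
        # config is closure-captured at enqueue time, so the callback
        # thread needs no ambient lookup and no ContextVar propagation.
        results.append(config.flush_interval)
        done.set()

    timer = threading.Timer(config.flush_interval, _flush)
    timer.start()
    return timer


enqueue_flush(MemoryConfig(flush_interval=0.01))
done.wait(timeout=1.0)
assert results == [0.01]
```

Capturing at enqueue time rather than at fire time is the key property: the value the callback sees is exactly the one the enqueuing request held, even if other requests run with different configs in the meantime.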