Escape shell variables to prevent Docker Compose from attempting
substitution at parse time. Rename allow_blocking_flag to allow_blocking
for consistency with the dev version.
Fixes the 'allow_blocking_flag not set' warning and enables the
--allow-blocking flag to work correctly.
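For reference, the escaping rule at work, shown on a hypothetical service (the actual compose file is not reproduced here): in a Compose file, `$$` yields a literal `$`, so the variable reaches the container shell unexpanded.

```yaml
services:
  app:
    # "$$HOSTNAME" is passed to the shell as "$HOSTNAME"; a single "$" would
    # make Compose try to substitute HOSTNAME at parse time instead.
    command: sh -c 'echo "$$HOSTNAME"'
```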
Bug fixes:
- Sanitize log params to prevent log injection (CodeQL)
- Reset threads_meta.status to idle/error when run completes
- Attach messages only to latest checkpoint in /history response
- Write threads_meta on POST /threads so new threads appear in search
Lint fixes:
- Remove unused imports (journal.py, migrations/env.py, test_converters.py)
- Convert lambda to named function (engine.py, Ruff E731)
- Remove unused logger definitions in repos (Ruff F841)
- Add logging to JSONL decode errors and empty except blocks
- Move side effects out of assert statements in tests (CodeQL)
- Remove unused local variables in tests (Ruff F841)
- Fix max_trace_content truncation to use byte length, not char length
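For illustration, a minimal sketch of byte-accurate truncation (the helper name is hypothetical, not the shipped code):

```python
def truncate_utf8(text: str, max_bytes: int) -> str:
    data = text.encode("utf-8")
    if len(data) <= max_bytes:
        return text
    # errors="ignore" drops any multi-byte character cut at the boundary
    return data[:max_bytes].decode("utf-8", errors="ignore")
```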
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- Fix naive datetime.now() → datetime.now(UTC) in all ORM models
- Fix seq race condition in DbRunEventStore.put() with FOR UPDATE
  and a UNIQUE(thread_id, seq) constraint (see the sketch after this list)
- Encapsulate _store access in RunManager.update_run_completion()
- Deduplicate _store.put() logic in RunManager via _persist_to_store()
- Add update_run_completion to RunStore ABC + MemoryRunStore
- Wire follow_up_to_run_id through the full create path
- Add error recovery to RunJournal._flush_sync() lost-event scenario
- Add migration note for search_threads breaking change
- Fix test_checkpointer_none_fix mock to set database=None
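A minimal sketch of the seq-allocation pattern named above, assuming SQLAlchemy and the table names that appear in this log; locking the thread's meta row is one way to apply FOR UPDATE here and is an assumption, not the shipped DbRunEventStore.put():

```python
import json
from sqlalchemy import text

async def put(session, thread_id: str, event: dict) -> int:
    # Lock this thread's parent row to serialize seq allocation per thread;
    # UNIQUE(thread_id, seq) rejects any duplicate that slips through anyway.
    await session.execute(
        text("SELECT thread_id FROM threads_meta WHERE thread_id = :tid FOR UPDATE"),
        {"tid": thread_id},
    )
    next_seq = (
        await session.execute(
            text("SELECT COALESCE(MAX(seq), 0) + 1 FROM run_events WHERE thread_id = :tid"),
            {"tid": thread_id},
        )
    ).scalar_one()
    await session.execute(
        text("INSERT INTO run_events (thread_id, seq, payload) VALUES (:tid, :seq, :payload)"),
        {"tid": thread_id, "seq": next_seq, "payload": json.dumps(event)},
    )
    return next_seq
```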
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Keep both DATABASE_URL (from persistence-scaffold) and WECOM
credentials (from main) after the merge.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* fix(memory): case-insensitive fact deduplication and positive reinforcement detection
Two fixes to the memory system:
1. _fact_content_key() now lowercases content before comparison, preventing
semantically duplicate facts like "User prefers Python" and "user prefers
python" from being stored separately (see the sketch after this list).
2. Adds detect_reinforcement() to MemoryMiddleware (closes #1719), mirroring
detect_correction(). When users signal approval ("yes exactly", "perfect",
"完全正确" ("completely correct"), etc.), the memory updater now receives reinforcement_detected=True
and injects a hint prompting the LLM to record confirmed preferences and
behaviors with high confidence.
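A minimal sketch of the normalized key (the review follow-up below replaces lower() with casefold(); any additional normalization in the shipped helper is not shown):

```python
def _fact_content_key(content: str) -> str:
    # casefold() is a Unicode-aware, more aggressive lower()
    return content.casefold()
```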
Changes across the full signal path:
- memory_middleware.py: _REINFORCEMENT_PATTERNS + detect_reinforcement()
- queue.py: reinforcement_detected field in ConversationContext and add()
- updater.py: reinforcement_detected param in update_memory() and
update_memory_from_conversation(); builds reinforcement_hint alongside
the existing correction_hint
Tests: 11 new tests covering deduplication, hint injection, and signal
detection (Chinese + English patterns, window boundary, conflict with correction).
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
* fix(memory): address Copilot review comments on reinforcement detection
- Tighten _REINFORCEMENT_PATTERNS: remove 很好 ("very good"), require punctuation/end-of-string boundaries on the remaining patterns, and split this-is-good into stricter variants (see the sketch below)
- Suppress reinforcement_detected when correction_detected is true to avoid mixed-signal noise
- Use casefold() instead of lower() for Unicode-aware fact deduplication
- Add missing test coverage for reinforcement_detected OR merge and forwarding in queue
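A hedged sketch of the tightened pattern shape; the shipped _REINFORCEMENT_PATTERNS list is longer and may differ:

```python
import re

# Each pattern is anchored to punctuation or end-of-string so fragments
# inside longer sentences do not fire.
_REINFORCEMENT_PATTERNS = [
    re.compile(r"(?:yes,?\s*exactly|perfect|完全正确)(?=[\s.!,。！]|$)", re.IGNORECASE),
]

def detect_reinforcement(text: str) -> bool:
    return any(p.search(text) for p in _REINFORCEMENT_PATTERNS)
```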
---------
Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
* Rename BACKEND_TODO.md to TODO.md in documentation
* Update MCP Setup Guide link in CONTRIBUTING.md
* Update reference to config.yaml path in documentation
* Fix config file path in TITLE_GENERATION_IMPLEMENTATION.md
Updated the path to the example config file in the documentation.
* fix(docker): use multi-stage build to remove build-essential from runtime image
The build-essential toolchain (~200 MB) was only needed for compiling
native Python extensions during `uv sync` but remained in the final
image, increasing size and attack surface. Split the Dockerfile into
a builder stage (with build-essential) and a clean runtime stage that
copies only the compiled artifacts, Node.js, Docker CLI, and uv.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(docker): add dev stage and pin docker:cli per review feedback
Address Copilot review comments:
- Add a `dev` build stage (FROM builder) that retains build-essential
so startup-time `uv sync` in dev containers can compile from source
- Update docker-compose-dev.yaml to use `target: dev` for gateway and
langgraph services
- Keep the clean runtime stage (no build-essential) as the default
final stage for production builds
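A minimal sketch of the three-stage layout described in these two commits; base images, paths, and uv installation details are assumptions, not the shipped Dockerfile:

```dockerfile
FROM python:3.12-slim AS builder
RUN apt-get update && apt-get install -y --no-install-recommends build-essential
WORKDIR /app
COPY pyproject.toml uv.lock ./
RUN uv sync --frozen               # native extensions compile here

FROM builder AS dev                # dev keeps build-essential for startup-time uv sync

FROM python:3.12-slim AS runtime   # default final stage: no build toolchain
WORKDIR /app
COPY --from=builder /app/.venv ./.venv
```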
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
sandbox_from_runtime() and ensure_sandbox_initialized() write
sandbox_id into runtime.context after acquiring a sandbox. When
lazy_init=True and no context is supplied to the graph run,
runtime.context is None (the LangGraph default), causing a TypeError
on the assignment.
Add `if runtime.context is not None` guards at all three write sites.
Reads already had equivalent guards (e.g. `runtime.context.get(...) if
runtime.context else None`); this brings writes into line.
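A sketch of the guarded write (attribute names come from this commit; acquire_sandbox() is a hypothetical stand-in for the acquisition step):

```python
async def _ensure_sandbox(runtime) -> str:
    sandbox_id = await acquire_sandbox()  # hypothetical acquisition helper
    if runtime.context is not None:       # None when lazy_init=True and no context is supplied
        runtime.context["sandbox_id"] = sandbox_id
    return sandbox_id
```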
Previously, the list endpoint always returned soul=null because
_agent_config_to_response() was called without include_soul=True.
This caused confusion since PUT /api/agents/{name} and GET /api/agents/{name}
both returned the soul content, but the list endpoint silently omitted it.
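A one-line sketch of the fix (function name from this commit; the list handler's shape is assumed):

```python
agents = [_agent_config_to_response(cfg, include_soul=True) for cfg in agent_configs]
```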
Co-authored-by: octo-patch <octo-patch@users.noreply.github.com>
Add three new public skills to enhance DeerFlow's content creation capabilities:
- **academic-paper-review**: Structured peer-review-quality analysis of
research papers following top-venue review standards (NeurIPS, ICML, ACL).
Covers methodology assessment, contribution evaluation, literature
positioning, and constructive feedback with a 3-phase workflow.
- **code-documentation**: Professional documentation generation for software
projects, including README generation, API reference docs, architecture
documentation with Mermaid diagrams, and inline code documentation
supporting Python, TypeScript, Go, Rust, and Java conventions.
- **newsletter-generation**: Curated newsletter creation with research
workflow, supporting daily digest, weekly roundup, deep-dive, and industry
briefing formats. Includes audience-specific tone adaptation and
multi-source content curation.
All skills:
- Follow the existing SKILL.md frontmatter convention (name + description; see the example after this list)
- Pass the official _validate_skill_frontmatter() validation
- Use hyphen-case naming consistent with existing skills
- Contain only allowed frontmatter properties
- Include comprehensive examples, quality checklists, and output templates
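For reference, the frontmatter shape these skills follow (description abbreviated, not the shipped copy):

```markdown
---
name: academic-paper-review
description: Peer-review-quality analysis of research papers following top-venue review standards.
---
```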
Without the config, the middleware:title tag was not injected,
causing the LLM response to be recorded as a lead_agent ai_message
in run_events.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- POST /api/threads/{thread_id}/history now combines two data sources:
checkpointer for checkpoint_id, metadata, title, thread_data;
event store for messages (complete history, not truncated by summarization)
- Strip internal LangGraph metadata keys from response
- Remove full channel_values serialization in favor of selective fields
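A rough sketch of the two-source merge, assuming LangGraph's checkpointer interface; the event-store API and helper names here are illustrative, not the actual handler:

```python
async def get_history(thread_id: str) -> dict:
    tup = await checkpointer.aget_tuple({"configurable": {"thread_id": thread_id}})
    events = await event_store.list_events(thread_id)  # full history, not truncated
    return {
        "checkpoint_id": tup.checkpoint["id"],
        "metadata": strip_internal_keys(tup.metadata),  # drop LangGraph-internal keys
        "messages": [e.payload for e in events if e.category == "message"],
    }
```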
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- POST /api/threads/search now queries threads_meta table directly,
removing the two-phase Store + Checkpointer scan approach
- Add ThreadMetaRepository.search() with metadata/status filters
- Add ThreadMetaRepository.update_display_name() for title sync
- Worker syncs checkpoint title to threads_meta.display_name on run completion
- Map display_name to values.title in search response for API compatibility
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- Message events (ai_message, ai_tool_call, tool_result, human_message) now use
  BaseMessage.model_dump() format, matching LangGraph checkpoint values.messages
  (see the sketch after this list)
- on_tool_end extracts tool_call_id/name/status from ToolMessage objects
- on_tool_error now emits tool_result message events with error status
- record_middleware uses middleware:{tag} event_type and middleware category
- Summarization custom events use middleware:summarize category
- TitleMiddleware injects middleware:title tag via get_config() inheritance
- SummarizationMiddleware model bound with middleware:summarize tag
- Worker writes human_message using HumanMessage.model_dump()
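For reference (first bullet above), the payload shape comes straight from langchain_core's pydantic models:

```python
from langchain_core.messages import AIMessage

payload = AIMessage(content="hello").model_dump()
# e.g. {"content": "hello", "type": "ai", "tool_calls": [], ...}
```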
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* feat(uploads): guide agent to use grep/glob/read_file for uploaded documents
Add workflow guidance to the <uploaded_files> context block so the agent
knows to use grep and glob (added in #1784) alongside read_file when
working with uploaded documents, rather than falling back to web search.
This is the final piece of the three-PR PDF agentic search pipeline:
- PR1 (#1727): pymupdf4llm converter produces structured Markdown with headings
- PR2 (#1738): document outline injected into agent context with line numbers
- PR3 (this): agent guided to use outline + grep + read_file workflow
* feat(uploads): add file-first priority and fallback guidance to uploaded_files context
* fix(uploads): handle split-bold headings and ** ** artefacts in extract_outline
- Add _clean_bold_title() to merge adjacent bold spans (** **) produced
by pymupdf4llm when bold text crosses span boundaries
- Add _SPLIT_BOLD_HEADING_RE (Style 3) to recognise **<num>** **<title>**
headings common in academic papers; excludes pure-number table headers
and rows with more than 4 bold blocks
- When the outline is empty, read the first 5 non-empty lines of the .md as a
  content preview and surface a grep hint in the agent context
- Update _format_file_entry to render the preview + grep hint instead of
silently omitting the outline section
- Add 3 new extract_outline tests and 2 new middleware tests (65 total)
* fix(uploads): address Copilot review comments on extract_outline regex
- Replace ASCII [A-Za-z] guard with negative lookahead to support non-ASCII
  titles (e.g. **1** **概述**, "Overview"); pure-numeric/punctuation blocks still
  excluded (see the sketch after this list)
- Replace .+ with [^*]+ and cap repetition at {0,2} (four blocks total) to
keep _SPLIT_BOLD_HEADING_RE linear and avoid ReDoS on malformed input
- Remove now-redundant len(blocks) <= 4 code-level check (enforced by regex)
- Log debug message with exc_info when preview extraction fails
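A hedged reconstruction of the Style 3 pattern after these review fixes; the shipped _SPLIT_BOLD_HEADING_RE may differ:

```python
import re

# A bold block that is not purely digits/punctuation, so "**1** **2**"
# table rows are rejected while "**1** **概述**" ("Overview") matches.
_TITLE_BLOCK = r"\*\*(?![\d\s.,:;!?()\-]+\*\*)[^*]+\*\*"

# Bold section number plus one to three bold title blocks (four blocks total);
# [^*]+ and the {0,2} cap keep matching linear on malformed input.
_SPLIT_BOLD_HEADING_RE = re.compile(
    rf"^\*\*\d+(?:\.\d+)*\*\*\s+{_TITLE_BLOCK}(?:\s+{_TITLE_BLOCK}){{0,2}}\s*$"
)
```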
Server-rendered data-variant={undefined} did not match the client-hydrated
markup. Now data-variant and data-size are only rendered when explicitly set.
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Co-authored-by: JeffJiang <for-eleven@hotmail.com>
* fix: add missing DEER_FLOW_CONFIG_PATH and DEER_FLOW_EXTENSIONS_CONFIG_PATH env vars to gateway service (fixes #1829)
The gateway service was missing these two environment variables that tell
it where to find the config files inside the container. Without them,
the gateway reads DEER_FLOW_CONFIG_PATH from the host's .env file (set
to a host filesystem path), which is not accessible inside the container,
causing FileNotFoundError on startup. The langgraph service already had
these variables set correctly.
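A sketch of the addition (variable names from this commit; the container paths are assumptions):

```yaml
services:
  gateway:
    environment:
      DEER_FLOW_CONFIG_PATH: /app/config.yaml
      DEER_FLOW_EXTENSIONS_CONFIG_PATH: /app/extensions-config.yaml
```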
* fix: remove nginx Plus-only zone/resolve directives from nginx.conf (fixes #1744)
The `zone` and `resolve` parameters in upstream server directives are
nginx Plus features not available in the standard `nginx:alpine` image.
This caused nginx to fail at startup with:
[emerg] invalid parameter "resolve" in /etc/nginx/nginx.conf:25
Remove these directives so the config is compatible with open-source nginx.
Docker's internal DNS (127.0.0.11, already configured via `resolver`) handles
service name resolution. The `resolver` directive is kept for the provisioner
location which uses variable-based proxy_pass for optional-service support.
* fix: inject longTermBackground into memory prompt
The format_memory_for_injection function only processed recentMonths and
earlierContext from the history section, silently dropping longTermBackground.
The LLM writes longTermBackground correctly and it persists to memory.json,
but it was never injected into the system prompt, leaving the user's
long-term background invisible to the AI.
Add the missing field handling and a regression test.
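A minimal sketch of the fixed function (key names from this commit; the surrounding structure of format_memory_for_injection is assumed):

```python
def format_memory_for_injection(memory: dict) -> str:
    history = memory.get("history", {})
    sections = []
    # longTermBackground was the silently dropped key
    for key in ("recentMonths", "earlierContext", "longTermBackground"):
        if history.get(key):
            sections.append(f"{key}:\n{history[key]}")
    return "\n\n".join(sections)
```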
* fix(middleware): handle list-type AIMessage.content in LoopDetectionMiddleware
LangChain AIMessage.content can be str | list. When using providers that
return structured content blocks (e.g. Anthropic thinking mode, certain
OpenAI-compatible gateways), content is a list of dicts like
[{"type": "text", "text": "..."}].
The hard_limit branch in _apply() concatenated content with a string via
(last_msg.content or "") + f"\n\n{_HARD_STOP_MSG}", which raises
TypeError when content is a non-empty list (list + str is invalid).
Add _append_text() static method that:
- Returns the text directly when content is None
- Appends a {"type": "text"} block when content is a list
- Falls back to string concatenation when content is a str
This is consistent with how other modules in the project already handle
list content (client.py._extract_text, memory_middleware, executor.py).
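A sketch of _append_text(), folding in the type-coercion fallback from the robustness commit below; the shipped code may differ:

```python
class LoopDetectionMiddleware:  # only the helper is sketched here
    @staticmethod
    def _append_text(content, text: str):
        if content is None:
            return text
        if isinstance(content, list):
            # return a new list so the caller's content is not mutated
            return [*content, {"type": "text", "text": text}]
        if isinstance(content, str):
            return content + text
        return str(content) + text  # coerce unexpected types instead of raising
```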
* test(middleware): add unit tests for _append_text and list content hard stop
Add regression tests to verify LoopDetectionMiddleware handles list-type
AIMessage.content correctly during hard stop:
- TestAppendText: unit tests for the new _append_text() static method
covering None, str, list (including empty list) content types
- TestHardStopWithListContent: integration tests verifying hard stop
works correctly with list content (Anthropic thinking mode), None
content, and str content
Requested by reviewer in PR #1823.
* fix(middleware): improve _append_text robustness and test isolation
- Add explicit isinstance(content, str) check with fallback for
unexpected types (coerce to str) to prevent TypeError on edge cases
- Deep-copy list content in _make_state() test helper to prevent
shared mutable references across test iterations
- Add test_unexpected_type_coerced_to_str: verify fallback for
non-str/list/None content types
- Add test_list_content_not_mutated_in_place: verify _append_text
does not modify the original list
* style: fix ruff format whitespace in test file
---------
Co-authored-by: ppyt <14163465+ppyt@users.noreply.github.com>
Add on_chat_model_start to capture structured prompt messages as llm_request events.
Replace llm_end trace events with llm_response using OpenAI Chat Completions format.
Track llm_call_index to pair request/response events.
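A hedged sketch of the pairing (the method signature follows LangChain's callback interface; the emit helper is illustrative):

```python
def on_chat_model_start(self, serialized, messages, *, run_id, **kwargs):
    self._llm_call_index += 1
    self._emit("llm_request", {
        "llm_call_index": self._llm_call_index,  # pairs with the matching llm_response
        "messages": langchain_messages_to_openai(messages[0]),
    })
```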
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Cache tool_call_id from on_tool_start keyed by run_id as a fallback for on_tool_end,
then emit a tool_result message event (role=tool, tool_call_id, content) after each
successful tool completion.
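A hedged sketch of the fallback cache (handler signatures follow LangChain's callback interface; where tool_call_id arrives on start is an assumption):

```python
def on_tool_start(self, serialized, input_str, *, run_id, inputs=None, **kwargs):
    tool_call_id = (inputs or {}).get("tool_call_id") or kwargs.get("tool_call_id")
    if tool_call_id:
        self._tool_call_ids[run_id] = tool_call_id  # fallback for on_tool_end

def on_tool_end(self, output, *, run_id, **kwargs):
    tool_call_id = getattr(output, "tool_call_id", None) or self._tool_call_ids.pop(run_id, None)
    self._emit("tool_result", {"role": "tool", "tool_call_id": tool_call_id,
                               "content": getattr(output, "content", str(output))})
```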
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
- ai_message content now uses {"role": "assistant", "content": "..."} format
- New ai_tool_call message event emitted when lead_agent LLM responds with tool_calls
- ai_tool_call uses langchain_to_openai_message converter for consistent format
- Both events include finish_reason in metadata ("stop" or "tool_calls")
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Pure functions langchain_to_openai_message, langchain_to_openai_completion,
langchain_messages_to_openai, and _infer_finish_reason for converting
LangChain BaseMessage objects to OpenAI Chat Completions format, used by
RunJournal for event storage. 15 unit tests added.
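A hedged sketch of langchain_to_openai_message for the assistant case only (the shipped converter also covers human/tool/system messages):

```python
import json
from langchain_core.messages import AIMessage

def langchain_to_openai_message(msg):
    out = {"role": "assistant", "content": msg.content or None}
    if isinstance(msg, AIMessage) and msg.tool_calls:
        out["tool_calls"] = [
            {"id": tc["id"], "type": "function",
             "function": {"name": tc["name"], "arguments": json.dumps(tc["args"])}}
            for tc in msg.tool_calls
        ]
    return out
```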
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>