Mirror of https://github.com/bytedance/deer-flow.git (synced 2026-04-25 11:18:22 +00:00)
feat: replace auto-admin creation with secure interactive first-boot setup (#2063)
* feat(persistence): add unified persistence layer with event store, token tracking, and feedback (#1930)

* feat(persistence): add SQLAlchemy 2.0 async ORM scaffold

  Introduce a unified database configuration (DatabaseConfig) that controls both the LangGraph checkpointer and the DeerFlow application persistence layer from a single `database:` config section.

  New modules:
  - deerflow.config.database_config — Pydantic config with memory/sqlite/postgres backends
  - deerflow.persistence — async engine lifecycle, DeclarativeBase with to_dict mixin, Alembic skeleton
  - deerflow.runtime.runs.store — RunStore ABC + MemoryRunStore implementation

  Gateway integration initializes/tears down the persistence engine in the existing langgraph_runtime() context manager. Legacy checkpointer config is preserved for backward compatibility.

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat(persistence): add RunEventStore ABC + MemoryRunEventStore

  Phase 2-A prerequisite for event storage: adds the unified run event stream interface (RunEventStore) with an in-memory implementation, RunEventsConfig, gateway integration, and comprehensive tests (27 cases).

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat(persistence): add ORM models, repositories, DB/JSONL event stores, RunJournal, and API endpoints

  Phase 2-B: run persistence + event storage + token tracking.
  - ORM models: RunRow (with token fields), ThreadMetaRow, RunEventRow
  - RunRepository implements RunStore ABC via SQLAlchemy ORM
  - ThreadMetaRepository with owner access control
  - DbRunEventStore with trace content truncation and cursor pagination
  - JsonlRunEventStore with per-run files and seq recovery from disk
  - RunJournal (BaseCallbackHandler) captures LLM/tool/lifecycle events, accumulates token usage by caller type, buffers and flushes to store
  - RunManager now accepts optional RunStore for persistent backing
  - Worker creates RunJournal, writes human_message, injects callbacks
  - Gateway deps use factory functions (RunRepository when DB available)
  - New endpoints: messages, run messages, run events, token-usage
  - ThreadCreateRequest gains assistant_id field
  - 92 tests pass (33 new), zero regressions

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat(persistence): add user feedback + follow-up run association

  Phase 2-C: feedback and follow-up tracking.

  - FeedbackRow ORM model (rating +1/-1, optional message_id, comment)
  - FeedbackRepository with CRUD, list_by_run/thread, aggregate stats
  - Feedback API endpoints: create, list, stats, delete
  - follow_up_to_run_id in RunCreateRequest (explicit or auto-detected from latest successful run on the thread)
  - Worker writes follow_up_to_run_id into human_message event metadata
  - Gateway deps: feedback_repo factory + getter
  - 17 new tests (14 FeedbackRepository + 3 follow-up association)
  - 109 total tests pass, zero regressions

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* test+config: comprehensive Phase 2 test coverage + deprecate checkpointer config

  - config.example.yaml: deprecate standalone checkpointer section, activate unified database:sqlite as default (drives both checkpointer + app data)
  - New: test_thread_meta_repo.py (14 tests) — full ThreadMetaRepository coverage including check_access owner logic, list_by_owner pagination
  - Extended test_run_repository.py (+4 tests) — completion preserves fields, list ordering desc, limit, owner_none returns all
  - Extended test_run_journal.py (+8 tests) — on_chain_error, track_tokens=false, middleware no ai_message, unknown caller tokens, convenience fields, tool_error, non-summarization custom event
  - Extended test_run_event_store.py (+7 tests) — DB batch seq continuity, make_run_event_store factory (memory/db/jsonl/fallback/unknown)
  - Extended test_phase2b_integration.py (+4 tests) — create_or_reject persists, follow-up metadata, summarization in history, full DB-backed lifecycle
  - Fixed DB integration test to use proper fake objects (not MagicMock) for JSON-serializable metadata
  - 157 total Phase 2 tests pass, zero regressions

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* config: move default sqlite_dir to .deer-flow/data

  Keep SQLite databases alongside other DeerFlow-managed data (threads, memory) under the .deer-flow/ directory instead of a top-level ./data folder.

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* refactor(persistence): remove UTFJSON, use engine-level json_serializer + datetime.now()

  - Replace custom UTFJSON type with standard sqlalchemy.JSON in all ORM models. Add json_serializer=json.dumps(ensure_ascii=False) to all create_async_engine calls so non-ASCII text (Chinese etc.) is stored as-is in both SQLite and Postgres.
  - Change ORM datetime defaults from datetime.now(UTC) to datetime.now(), remove UTC imports.

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* refactor(gateway): simplify deps.py with getter factory + inline repos

  - Replace 6 identical getter functions with _require() factory.
  - Inline 3 _make_*_repo() factories into langgraph_runtime(), call get_session_factory() once instead of 3 times.
  - Add thread_meta upsert in start_run (services.py).
  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat(docker): add UV_EXTRAS build arg for optional dependencies

  Support installing optional dependency groups (e.g. postgres) at Docker build time via the UV_EXTRAS build arg:

      UV_EXTRAS=postgres docker compose build

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* refactor(journal): fix flush, token tracking, and consolidate tests

  RunJournal fixes:
  - _flush_sync: retain events in buffer when no event loop instead of dropping them; worker's finally block flushes via async flush().
  - on_llm_end: add tool_calls filter and caller=="lead_agent" guard for ai_message events; mark message IDs for dedup with record_llm_usage.
  - worker.py: persist completion data (tokens, message count) to RunStore in finally block.

  Model factory:
  - Auto-inject stream_usage=True for BaseChatOpenAI subclasses with custom api_base, so usage_metadata is populated in streaming responses.

  Test consolidation:
  - Delete test_phase2b_integration.py (redundant with existing tests).
  - Move DB-backed lifecycle test into test_run_journal.py.
  - Add tests for stream_usage injection in test_model_factory.py.
  - Clean up executor/task_tool dead journal references.

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat(events): widen content type to str|dict in all store backends

  Allow event content to be a dict (for structured OpenAI-format messages) in addition to plain strings. Dict values are JSON-serialized for the DB backend and deserialized on read; memory and JSONL backends handle dicts natively. Trace truncation now serializes dicts to JSON before measuring.
  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(events): use metadata flag instead of heuristic for dict content detection

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat(converters): add LangChain-to-OpenAI message format converters

  Pure functions langchain_to_openai_message, langchain_to_openai_completion, langchain_messages_to_openai, and _infer_finish_reason for converting LangChain BaseMessage objects to OpenAI Chat Completions format, used by RunJournal for event storage. 15 unit tests added.

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(converters): handle empty list content as null, clean up test

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat(events): human_message content uses OpenAI user message format

  Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* feat(events): ai_message uses OpenAI format, add ai_tool_call message event

  - ai_message content now uses {"role": "assistant", "content": "..."} format
  - New ai_tool_call message event emitted when lead_agent LLM responds with tool_calls
  - ai_tool_call uses langchain_to_openai_message converter for consistent format
  - Both events include finish_reason in metadata ("stop" or "tool_calls")

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat(events): add tool_result message event with OpenAI tool message format

  Cache tool_call_id from on_tool_start keyed by run_id as fallback for on_tool_end, then emit a tool_result message event (role=tool, tool_call_id, content) after each successful tool completion.

  Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* feat(events): summary content uses OpenAI system message format

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat(events): replace llm_start/llm_end with llm_request/llm_response in OpenAI format

  Add on_chat_model_start to capture structured prompt messages as llm_request events.
  Replace llm_end trace events with llm_response using OpenAI Chat Completions format. Track llm_call_index to pair request/response events.

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat(events): add record_middleware method for middleware trace events

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* test(events): add full run sequence integration test for OpenAI content format

  Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* feat(events): align message events with checkpoint format and add middleware tag injection

  - Message events (ai_message, ai_tool_call, tool_result, human_message) now use BaseMessage.model_dump() format, matching LangGraph checkpoint values.messages
  - on_tool_end extracts tool_call_id/name/status from ToolMessage objects
  - on_tool_error now emits tool_result message events with error status
  - record_middleware uses middleware:{tag} event_type and middleware category
  - Summarization custom events use middleware:summarize category
  - TitleMiddleware injects middleware:title tag via get_config() inheritance
  - SummarizationMiddleware model bound with middleware:summarize tag
  - Worker writes human_message using HumanMessage.model_dump()

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat(threads): switch search endpoint to threads_meta table and sync title

  - POST /api/threads/search now queries the threads_meta table directly, removing the two-phase Store + Checkpointer scan approach
  - Add ThreadMetaRepository.search() with metadata/status filters
  - Add ThreadMetaRepository.update_display_name() for title sync
  - Worker syncs checkpoint title to threads_meta.display_name on run completion
  - Map display_name to values.title in search response for API compatibility

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat(threads): history endpoint reads messages from event store

  - POST /api/threads/{thread_id}/history now combines two data sources: the checkpointer for checkpoint_id, metadata, title, and thread_data; the event store for messages (complete history, not truncated by summarization)
  - Strip internal LangGraph metadata keys from response
  - Remove full channel_values serialization in favor of selective fields

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: remove duplicate optional-dependencies header in pyproject.toml

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(middleware): pass tagged config to TitleMiddleware ainvoke call

  Without the config, the middleware:title tag was not injected, causing the LLM response to be recorded as a lead_agent ai_message in run_events.

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: resolve merge conflict in .env.example

  Keep both DATABASE_URL (from persistence-scaffold) and WECOM credentials (from main) after the merge.

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(persistence): address review feedback on PR #1851

  - Fix naive datetime.now() → datetime.now(UTC) in all ORM models
  - Fix seq race condition in DbRunEventStore.put() with FOR UPDATE and UNIQUE(thread_id, seq) constraint
  - Encapsulate _store access in RunManager.update_run_completion()
  - Deduplicate _store.put() logic in RunManager via _persist_to_store()
  - Add update_run_completion to RunStore ABC + MemoryRunStore
  - Wire follow_up_to_run_id through the full create path
  - Add error recovery to RunJournal._flush_sync() lost-event scenario
  - Add migration note for search_threads breaking change
  - Fix test_checkpointer_none_fix mock to set database=None

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* chore: update uv.lock

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(persistence): address 22 review comments from CodeQL, Copilot, and Code Quality

  Bug fixes:
  - Sanitize log params to prevent log injection (CodeQL)
  - Reset threads_meta.status to idle/error when run completes
  - Attach messages only to latest checkpoint in /history response
  - Write threads_meta on POST /threads so new threads appear in search

  Lint fixes:
  - Remove unused imports (journal.py, migrations/env.py, test_converters.py)
  - Convert lambda to named function (engine.py, Ruff E731)
  - Remove unused logger definitions in repos (Ruff F841)
  - Add logging to JSONL decode errors and empty except blocks
  - Separate assert side-effects in tests (CodeQL)
  - Remove unused local variables in tests (Ruff F841)
  - Fix max_trace_content truncation to use byte length, not char length

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* style: apply ruff format to persistence and runtime files

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* Potential fix for pull request finding 'Statement has no effect'

  Co-authored-by: Copilot Autofix powered by AI <223894421+github-code-quality[bot]@users.noreply.github.com>

* refactor(runtime): introduce RunContext to reduce run_agent parameter bloat

  Extract checkpointer, store, event_store, run_events_config, thread_meta_repo, and follow_up_to_run_id into a frozen RunContext dataclass. Add get_run_context() in deps.py to build the base context from app.state singletons. start_run() uses dataclasses.replace() to enrich per-run fields before passing ctx to run_agent.

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* refactor(gateway): move sanitize_log_param to app/gateway/utils.py

  Extract the log-injection sanitizer from routers/threads.py into a shared utils module and rename it to sanitize_log_param (public API). Eliminates the reverse service → router import in services.py.

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* perf: use SQL aggregation for feedback stats and thread token usage

  Replace Python-side counting in FeedbackRepository.aggregate_by_run with a single SELECT COUNT/SUM query.
  Add a RunStore.aggregate_tokens_by_thread abstract method with a SQL GROUP BY implementation in RunRepository and a Python fallback in MemoryRunStore. Simplify the thread_token_usage endpoint to delegate to the new method, eliminating the limit=10000 truncation risk.

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* docs: annotate DbRunEventStore.put() as low-frequency path

  Add a docstring clarifying that put() opens a per-call transaction with FOR UPDATE and should only be used for infrequent writes (currently just the initial human_message event). High-throughput callers should use put_batch() instead.

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(threads): fall back to Store search when ThreadMetaRepository is unavailable

  When database.backend=memory (the default) or no SQL session factory is configured, search_threads now queries the LangGraph Store instead of returning 503. Returns an empty list if neither Store nor repo is available.

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* refactor(persistence): introduce ThreadMetaStore ABC for backend-agnostic thread metadata

  Add a ThreadMetaStore abstract base class with a create/get/search/update/delete interface. ThreadMetaRepository (SQL) now inherits from it. A new MemoryThreadMetaStore wraps LangGraph BaseStore for memory-mode deployments. deps.py now always provides a non-None thread_meta_repo, eliminating all `if thread_meta_repo is not None` guards in services.py, worker.py, and routers/threads.py. search_threads no longer needs a Store fallback branch.

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* refactor(history): read messages from checkpointer instead of RunEventStore

  The /history endpoint now reads messages directly from the checkpointer's channel_values (the authoritative source) instead of querying RunEventStore.list_messages(). The RunEventStore API is preserved for other consumers.
  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(persistence): address new Copilot review comments

  - feedback.py: validate thread_id/run_id before deleting feedback
  - jsonl.py: add path traversal protection with ID validation
  - run_repo.py: parse `before` to datetime for PostgreSQL compat
  - thread_meta_repo.py: fix pagination when metadata filter is active
  - database_config.py: use resolve_path for sqlite_dir consistency

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* Implement skill self-evolution and skill_manage flow (#1874)

  * chore: ignore .worktrees directory
  * Add skill_manage self-evolution flow
  * Fix CI regressions for skill_manage
  * Address PR review feedback for skill evolution
  * fix(skill-evolution): preserve history on delete
  * fix(skill-evolution): tighten scanner fallbacks
  * docs: add skill_manage e2e evidence screenshot
  * fix(skill-manage): avoid blocking fs ops in session runtime

  ---------

  Co-authored-by: Willem Jiang <willem.jiang@gmail.com>

* fix(config): resolve sqlite_dir relative to CWD, not Paths.base_dir

  resolve_path() resolves relative to Paths.base_dir (.deer-flow), which double-nested the path to .deer-flow/.deer-flow/data/app.db. Use Path.resolve() (CWD-relative) instead.

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* Feature/feishu receive file (#1608)

  * feat(feishu): add channel file materialization hook for inbound messages

    - Introduce Channel.receive_file(msg, thread_id) as a base method for file materialization; the default is a no-op.
    - Implement FeishuChannel.receive_file to download files/images from Feishu messages, save them to the sandbox, and inject virtual paths into msg.text.
    - Update ChannelManager to call receive_file for any channel if msg.files is present, enabling downstream model access to user-uploaded files.
    - No impact on Slack/Telegram or other channels (they inherit the default no-op).
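The receive_file hook described in the commit above is a template-method pattern: a no-op base hook, a per-channel override, and a manager that calls the hook only when the message carries files. A minimal sketch, assuming a hypothetical Msg type, virtual-path layout, and dispatch method name (only Channel.receive_file, FeishuChannel, ChannelManager, msg.files, and msg.text come from the commit):

```python
import asyncio
from dataclasses import dataclass, field


@dataclass
class Msg:
    """Hypothetical stand-in for the inbound channel message type."""
    text: str
    files: list[str] = field(default_factory=list)


class Channel:
    async def receive_file(self, msg: Msg, thread_id: str) -> None:
        """Base hook for file materialization; the default is a no-op."""
        return None


class FeishuChannel(Channel):
    async def receive_file(self, msg: Msg, thread_id: str) -> None:
        # The real implementation downloads each resource from Feishu and
        # saves it into the sandbox; this sketch only injects the virtual
        # paths into msg.text so the model can reference the files.
        for file_key in msg.files:
            virtual_path = f"/mnt/user-files/{thread_id}/{file_key}"  # assumed layout
            msg.text += f"\n[file: {virtual_path}]"


class ChannelManager:
    async def dispatch(self, channel: Channel, msg: Msg, thread_id: str) -> Msg:
        # Call the hook for any channel when the message carries files;
        # channels without an override inherit the no-op.
        if msg.files:
            await channel.receive_file(msg, thread_id)
        return msg
```

Because the manager calls the hook unconditionally for any channel type, adding file support to a new channel only requires overriding receive_file, with no manager changes.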
  * style(backend): format code with ruff for lint compliance

    - Auto-formatted packages/harness/deerflow/agents/factory.py and tests/test_create_deerflow_agent.py using `ruff format`
    - Ensured both files conform to project linting standards
    - Fixes CI lint check failures caused by code style issues

  * fix(feishu): handle file write operation asynchronously to prevent blocking
  * fix(feishu): rename GetMessageResourceRequest to _GetMessageResourceRequest and remove redundant code
  * test(feishu): add tests for receive_file method and placeholder replacement
  * fix(manager): remove unnecessary type casting for channel retrieval
  * fix(feishu): update logging messages to reflect resource handling instead of image
  * fix(feishu): sanitize filename by replacing invalid characters in file uploads
  * fix(feishu): improve filename sanitization and reorder image key handling in message processing
  * fix(feishu): add thread lock to prevent filename conflicts during file downloads
  * fix(test): correct bad merge in test_feishu_parser.py
  * chore: run ruff and apply formatting cleanup

    fix(feishu): preserve rich-text attachment order and improve fallback filename handling

* fix(docker): restore gateway env vars and fix langgraph empty arg issue (#1915)

  Two production docker-compose.yaml bugs prevent `make up` from working:

  1. The gateway is missing the DEER_FLOW_CONFIG_PATH and DEER_FLOW_EXTENSIONS_CONFIG_PATH environment overrides. They were added in fb2d99f (#1836) but accidentally reverted by ca2fb95 (#1847). Without them, the gateway reads host paths from .env via env_file, causing a FileNotFoundError inside the container.
  2. The langgraph command fails when LANGGRAPH_ALLOW_BLOCKING is unset (the default). An empty $${allow_blocking} inserts a bare space between flags, causing ' --no-reload' to be parsed as an unexpected extra argument. Fix by building the args string first and conditionally appending --allow-blocking.
  Co-authored-by: cooper <cooperfu@tencent.com>

* fix(frontend): resolve invalid HTML nesting and tabnabbing vulnerabilities (#1904)

  * fix(frontend): resolve invalid HTML nesting and tabnabbing vulnerabilities

    Fix `<button>` inside `<a>` invalid HTML in artifact components and add the missing `noopener,noreferrer` to `window.open` calls to prevent reverse tabnabbing.

    Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

  * fix(frontend): address Copilot review on tabnabbing and double-tab-open

    Remove the redundant parent onClick on the web_fetch ChainOfThoughtStep to prevent opening two tabs on link click, and explicitly null out window.opener after window.open() for defensive tabnabbing hardening.

  ---------

  Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>

* refactor(persistence): organize entities into per-entity directories

  Restructure the persistence layer from a horizontal "models/ + repositories/" split into vertical entity-aligned directories. Each entity (thread_meta, run, feedback) now owns its ORM model, abstract interface (where applicable), and concrete implementations under a single directory with an aggregating __init__.py for one-line imports.

  Layout:
    persistence/thread_meta/{base,model,sql,memory}.py
    persistence/run/{model,sql}.py
    persistence/feedback/{model,sql}.py

  models/__init__.py is kept as a facade so Alembic autogenerate continues to discover all ORM tables via Base.metadata. RunEventRow remains under models/run_event.py because its storage implementation lives in runtime/events/store/db.py and has no matching repository directory. The repositories/ directory is removed entirely.

  All call sites in gateway/deps.py and tests are updated to import from the new entity packages, e.g.:
    from deerflow.persistence.thread_meta import ThreadMetaRepository
    from deerflow.persistence.run import RunRepository
    from deerflow.persistence.feedback import FeedbackRepository

  Full test suite passes (1690 passed, 14 skipped).
  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(gateway): sync thread rename and delete through ThreadMetaStore

  The POST /threads/{id}/state endpoint previously synced title changes only to the LangGraph Store via _store_upsert. In sqlite mode the search endpoint reads from the ThreadMetaRepository SQL table, so renames never appeared in /threads/search until the next agent run completed (worker.py syncs the title from checkpoint to thread_meta in its finally block). Likewise, the DELETE /threads/{id} endpoint cleaned up the filesystem, Store, and checkpointer but left the threads_meta row orphaned in sqlite, so deleted threads kept appearing in /threads/search.

  Fix both endpoints by routing through the ThreadMetaStore abstraction, which already has the correct sqlite/memory implementations wired up by deps.py. The rename path now calls update_display_name() and the delete path calls delete() — both work uniformly across backends.

  Verified end-to-end with curl in gateway mode against the sqlite backend. Existing test suite (1690 passed) and focused router/repo tests pass.

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* refactor(gateway): route all thread metadata access through ThreadMetaStore

  Following the rename/delete bug fix in PR1, migrate the remaining direct LangGraph Store reads/writes in the threads router and services to the ThreadMetaStore abstraction so that the sqlite and memory backends behave identically and the legacy dual-write paths can be removed.

  Migrated endpoints (threads.py):
  - create_thread: idempotency check + write now use thread_meta_repo.get/create instead of dual-writing the LangGraph Store and the SQL row.
  - get_thread: reads from thread_meta_repo.get; the checkpoint-only fallback for legacy threads is preserved.
  - patch_thread: replaced _store_get/_store_put with thread_meta_repo.update_metadata.
  - delete_thread_data: dropped the legacy store.adelete; thread_meta_repo.delete already covers it.
  Removed dead code (services.py):
  - _upsert_thread_in_store — redundant with the immediately following thread_meta_repo.create() call.
  - _sync_thread_title_after_run — worker.py's finally block already syncs the title via thread_meta_repo.update_display_name() after each run.

  Removed dead code (threads.py):
  - _store_get / _store_put / _store_upsert helpers (no remaining callers).
  - THREADS_NS constant.
  - get_store import (the router no longer touches the LangGraph Store directly).

  New abstract method:
  - ThreadMetaStore.update_metadata(thread_id, metadata) merges metadata into the thread's metadata field. Implemented in both ThreadMetaRepository (SQL, read-modify-write inside one session) and MemoryThreadMetaStore. Three new unit tests cover merge / empty / nonexistent behaviour.

  Net change: -134 lines. Full test suite: 1693 passed, 14 skipped. Verified end-to-end with curl in gateway mode against the sqlite backend (create / patch / get / rename / search / delete).

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

--------

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Co-authored-by: Copilot Autofix powered by AI <223894421+github-code-quality[bot]@users.noreply.github.com>
Co-authored-by: DanielWalnut <45447813+hetaoBackend@users.noreply.github.com>
Co-authored-by: Willem Jiang <willem.jiang@gmail.com>
Co-authored-by: JilongSun <965640067@qq.com>
Co-authored-by: jie <49781832+stan-fu@users.noreply.github.com>
Co-authored-by: cooper <cooperfu@tencent.com>
Co-authored-by: yangzheli <43645580+yangzheli@users.noreply.github.com>

* feat(auth): release-validation pass for 2.0-rc — 12 blockers + simplify follow-ups (#2008)

* feat(auth): introduce backend auth module

  Port the RFC-001 authentication core from PR #1728:
  - JWT token handling (create_access_token, decode_token, TokenPayload)
  - Password hashing (bcrypt) with verify_password
  - SQLite UserRepository with base interface
  - Provider Factory pattern (LocalAuthProvider)
  - CLI reset_admin tool
  - Auth-specific errors (AuthErrorCode, TokenError, AuthErrorResponse)

  Deps:
  - bcrypt>=4.0.0
  - pyjwt>=2.9.0
  - email-validator>=2.0.0
  - backend/uv.toml pins the public PyPI index

  Tests: 12 pure unit tests (test_auth_config.py, test_auth_errors.py).

  Scope note: authz.py, test_auth.py, and test_auth_type_system.py are deferred to commit 2 because they depend on middleware and deps wiring that is not yet in place. Commit 1 stays "pure new files only" as the spec mandates.

* feat(auth): wire auth end-to-end (middleware + frontend replacement)

  Backend:
  - Port auth_middleware, csrf_middleware, langgraph_auth, routers/auth
  - Port authz decorator (owner_filter_key defaults to 'owner_id')
  - Merge app.py: register AuthMiddleware + CSRFMiddleware + CORS, add _ensure_admin_user lifespan hook, _migrate_orphaned_threads helper, register auth router
  - Merge deps.py: add get_local_provider, get_current_user_from_request, get_optional_user_from_request; keep get_current_user as thin str|None adapter for the feedback router
  - langgraph.json: add auth path pointing to langgraph_auth.py:auth
  - Rename metadata['user_id'] -> metadata['owner_id'] in langgraph_auth (both the metadata write and the LangGraph filter dict) + test fixtures

  Frontend:
  - Delete the better-auth library and api catch-all route
  - Remove the better-auth npm dependency and env vars (BETTER_AUTH_SECRET, BETTER_AUTH_GITHUB_*) from env.js
  - Port frontend/src/core/auth/* (AuthProvider, gateway-config, proxy-policy, server-side getServerSideUser, types)
  - Port frontend/src/core/api/fetcher.ts
  - Port (auth)/layout, (auth)/login, (auth)/setup pages
  - Rewrite workspace/layout.tsx as a server component that calls getServerSideUser and wraps in AuthProvider
  - Port workspace/workspace-content.tsx for the client-side sidebar logic

  Tests:
  - Port 5 auth test files (test_auth, test_auth_middleware, test_auth_type_system, test_ensure_admin, test_langgraph_auth)
  - 176 auth tests PASS

  After this commit: the login/logout/registration flow works, but the persistence layer does not yet filter by owner_id. Commit 4 closes that gap.

* feat(auth): account settings page + i18n

  - Port account-settings-page.tsx (change password, change email, logout)
  - Wire into settings-dialog.tsx as a new "account" section with UserIcon, rendered first in the section list
  - Add i18n keys:
    - en-US/zh-CN: settings.sections.account ("Account" / "账号")
    - en-US/zh-CN: button.logout ("Log out" / "退出登录")
    - types.ts: matching type declarations

* feat(auth): enforce owner_id across 2.0-rc persistence layer

  Add request-scoped contextvar-based owner filtering to the threads_meta, runs, run_events, and feedback repositories. Router code is unchanged — isolation is enforced at the storage layer so that any caller that forgets to pass owner_id still gets filtered results, and new routes cannot accidentally leak data.

  Core infrastructure
  -------------------
  - deerflow/runtime/user_context.py (new):
    - ContextVar[CurrentUser | None] with default None
    - runtime_checkable CurrentUser Protocol (structural subtype with .id)
    - set/reset/get/require helpers
    - AUTO sentinel + resolve_owner_id(value, method_name) for sentinel three-state resolution: AUTO reads the contextvar, an explicit str overrides, an explicit None bypasses the filter (for migration/CLI)

  Repository changes
  ------------------
  - ThreadMetaRepository: create/get/search/update_*/delete gain an owner_id=AUTO kwarg; read paths filter by owner, writes stamp it, mutations check ownership before applying
  - RunRepository: put/get/list_by_thread/delete gain an owner_id=AUTO kwarg
  - FeedbackRepository: create/get/list_by_run/list_by_thread/delete gain an owner_id=AUTO kwarg
  - DbRunEventStore: list_messages/list_events/list_messages_by_run/count_messages/delete_by_thread/delete_by_run gain an owner_id=AUTO kwarg.
    Write paths (put/put_batch) read the contextvar softly: when a request-scoped user is available, owner_id is stamped; background worker writes without a user context pass None, which is valid (an orphan row to be bound by migration)

  Schema
  ------
  - persistence/models/run_event.py: RunEventRow.owner_id: Mapped[str | None] = mapped_column(String(64), nullable=True, index=True)
  - No alembic migration needed: 2.0 ships fresh; Base.metadata.create_all picks up the new column automatically

  Middleware
  ----------
  - auth_middleware.py: after the cookie check, call get_optional_user_from_request to load the real User, stamp it into request.state.user AND the contextvar via set_current_user, reset in a try/finally. Public paths and unauthenticated requests continue without the contextvar, and @require_auth handles the strict 401 path

  Test infrastructure
  -------------------
  - tests/conftest.py: a @pytest.fixture(autouse=True) _auto_user_context sets a default SimpleNamespace(id="test-user-autouse") on every test unless marked @pytest.mark.no_auto_user.
    Keeps the existing 20+ persistence tests passing without modification
  - pyproject.toml [tool.pytest.ini_options]: register the no_auto_user marker so pytest does not emit warnings for opt-out tests
  - tests/test_user_context.py: 6 tests covering three-state semantics, Protocol duck typing, and the require/optional APIs
  - tests/test_thread_meta_repo.py: one test updated to pass owner_id=None explicitly where it previously relied on the old default

  Test results
  ------------
  - test_user_context.py: 6 passed
  - test_auth*.py + test_langgraph_auth.py + test_ensure_admin.py: 127
  - test_run_event_store / test_run_repository / test_thread_meta_repo / test_feedback: 92 passed
  - Full backend suite: 1905 passed, 2 failed (both @requires_llm flaky integration tests unrelated to auth), 1 skipped

* feat(auth): extend orphan migration to 2.0-rc persistence tables

  _ensure_admin_user now runs a three-step pipeline on every boot:

  Step 1 (fatal): admin user exists / is created / password is reset
  Step 2 (non-fatal): LangGraph store orphan threads → admin
  Step 3 (non-fatal): SQL persistence tables → admin
  - threads_meta
  - runs
  - run_events
  - feedback

  Each step is idempotent. The fatal/non-fatal split mirrors PR #1728's original philosophy: admin creation failure blocks startup (the system is unusable without an admin), whereas migration failures log a warning and let the service proceed (a partial migration is recoverable; a missing admin is not).

  Key helpers
  -----------
  - _iter_store_items(store, namespace, *, page_size=500): async generator that cursor-paginates across LangGraph store pages. Fixes PR #1728's hardcoded limit=1000 bug that would silently lose orphans beyond the first page.
  - _migrate_orphaned_threads(store, admin_user_id): rewritten to use _iter_store_items. Returns the migrated count so the caller can log it; raises only on unhandled exceptions.
- _migrate_orphan_sql_tables(admin_user_id): imports the 4 ORM models lazily, grabs the shared session factory, runs one UPDATE per table in a single transaction, and commits once. No-op when no persistence backend is configured (in-memory dev).

Tests: test_ensure_admin.py (8 passed)

* test(auth): port AUTH test plan docs + lint/format pass

- Port backend/docs/AUTH_TEST_PLAN.md and AUTH_UPGRADE.md from PR #1728
- Rename metadata.user_id → metadata.owner_id in AUTH_TEST_PLAN.md (4 occurrences from the original PR doc)
- ruff auto-fix UP037 in sentinel type annotations: drop quotes around "str | None | _AutoSentinel" now that from __future__ import annotations makes them implicit string forms
- ruff format: 2 files (app/gateway/app.py, runtime/user_context.py)

Note on test coverage additions:
- conftest.py autouse fixture was already added in commit 4 (had to be co-located with the repository changes to keep pre-existing persistence tests passing)
- cross-user isolation E2E tests (test_owner_isolation.py) deferred — enforcement is already proven by the 98-test repository suite via the autouse fixture + explicit _AUTO sentinel exercises
- New test cases (TC-API-17..20, TC-ATK-13, TC-MIG-01..07) listed in AUTH_TEST_PLAN.md are deferred to a follow-up PR — they are manual-QA test cases rather than pytest code, and the spec-level coverage is already met by test_user_context.py + the 98-test repository suite.

Final test results:
- Auth suite (test_auth*, test_langgraph_auth, test_ensure_admin, test_user_context): 186 passed
- Persistence suite (test_run_event_store, test_run_repository, test_thread_meta_repo, test_feedback): 98 passed
- Lint: ruff check + ruff format both clean

* test(auth): add cross-user isolation test suite

10 tests exercising the storage-layer owner filter by manually switching the user_context contextvar between two users.
Verifies the safety invariant: after a repository write with owner_id=A, a subsequent read with owner_id=B must not return the row, and vice versa.

Covers all 4 tables that own user-scoped data:
- TC-API-17 threads_meta — read, search, update, delete cross-user
- TC-API-18 runs — get, list_by_thread, delete cross-user
- TC-API-19 run_events — list_messages, list_events, count_messages, delete_by_thread (CRITICAL: raw conversation content leak vector)
- TC-API-20 feedback — get, list_by_run, delete cross-user

Plus two meta-tests verifying the sentinel pattern itself:
- AUTO + unset contextvar raises RuntimeError
- explicit owner_id=None bypasses the filter (migration escape hatch)

Architecture note
-----------------
These tests bypass the HTTP layer by design. The full chain (cookie → middleware → contextvar → repository) is covered piecewise:
- test_auth_middleware.py: middleware sets the contextvar from cookies
- test_owner_isolation.py: repositories enforce isolation when the contextvar is set to different users

Together they prove the end-to-end safety property without the ceremony of spinning up a full TestClient + in-memory DB for every router endpoint.

Tests pass: 231 (full auth + persistence + isolation suite)
Lint: clean

* refactor(auth): migrate user repository to SQLAlchemy ORM

Move the users table into the shared persistence engine so auth matches the pattern of threads_meta, runs, run_events, and feedback — one engine, one session factory, one schema-init codepath.
New files
---------
- persistence/user/__init__.py, persistence/user/model.py: UserRow ORM class with a partial unique index on (oauth_provider, oauth_id)
- Registered in persistence/models/__init__.py so Base.metadata.create_all() picks it up

Modified
--------
- auth/repositories/sqlite.py: rewritten as async SQLAlchemy, identical constructor pattern to the other four repositories (def __init__(self, session_factory) + self._sf = session_factory)
- auth/config.py: drop the users_db_path field — storage is configured through config.database like every other table
- deps.py/get_local_provider: construct SQLiteUserRepository with the shared session factory; fail fast if the engine is not initialised
- tests/test_auth.py: rewrite test_sqlite_round_trip_new_fields to use the shared engine (init_engine + close_engine in a tempdir)
- tests/test_auth_type_system.py: add a per-test autouse fixture that spins up a scratch engine and resets the deps._cached_* singletons

* refactor(auth): remove SQL orphan migration (unused in supported scenarios)

The _migrate_orphan_sql_tables helper existed to bind NULL owner_id rows in threads_meta, runs, run_events, and feedback to the admin on first boot. But in every supported upgrade path, it's a no-op:

1. Fresh install: create_all builds fresh tables, no legacy rows
2. No-auth → with-auth (no existing persistence DB): persistence tables are created fresh by create_all, no legacy rows
3. No-auth → with-auth (has existing persistence DB from #1930): NOT a supported upgrade path — "existing DB to existing DB" schema evolution is out of scope; users wipe the DB or run a manual ALTER

So the SQL orphan migration never has anything to do in the supported matrix. Delete the function and simplify _ensure_admin_user from a 3-step pipeline to a 2-step one (admin creation + LangGraph store orphan migration only).
LangGraph store orphan migration stays: it serves the real "no-auth → with-auth" upgrade path, where a user's existing LangGraph thread metadata has no owner_id field and needs to be stamped with the newly-created admin's id.

Tests: 284 passed (auth + persistence + isolation)
Lint: clean

* security(auth): write initial admin password to 0600 file instead of logs

CodeQL py/clear-text-logging-sensitive-data flagged 3 call sites that logged the auto-generated admin password to stdout via logger.info(). Production log aggregators (ELK/Splunk/etc.) would have captured those cleartext secrets. Replace with a shared helper that writes to .deer-flow/admin_initial_credentials.txt with mode 0600, and log only the path.

New file
--------
- app/gateway/auth/credential_file.py: write_initial_credentials() helper. Takes email, password, and an "initial"/"reset" label. Creates .deer-flow/ if missing, writes a header comment plus the email+password, chmods 0o600, and returns the absolute Path.

Modified
--------
- app/gateway/app.py: both _ensure_admin_user paths (fresh creation + needs_setup password reset) now write to the file and log the path
- app/gateway/auth/reset_admin.py: rewritten to use the shared ORM repo (SQLiteUserRepository with session_factory) and the credential_file helper. The previous implementation was broken after the earlier ORM refactor — it still imported _get_users_conn and constructed SQLiteUserRepository() without a session factory.

No tests changed — the three password-log sites are all exercised via the existing test_ensure_admin.py, which checks that startup succeeds, not that a specific string appears in logs.

CodeQL alerts 272, 283, 284: all resolved.

* security(auth): strict JWT validation in middleware (fix junk cookie bypass)

AUTH_TEST_PLAN test 7.5.8 expects junk cookies to be rejected with 401. The previous middleware behaviour was "presence-only": check that some access_token cookie exists, then pass through.
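The credential-file helper described above can be sketched with the stdlib. This is an assumption-laden sketch, not app/gateway/auth/credential_file.py itself: the file name matches the commit text, but the header wording and the home-directory parameter are illustrative.

```python
# Sketch: write secrets to a file created with mode 0600 so log
# aggregators never see them; only the path is safe to log.
import os
import tempfile
from pathlib import Path

def write_initial_credentials(email: str, password: str, *, label: str, home: Path) -> Path:
    """Write credentials to a 0600 file and return its absolute path."""
    home.mkdir(parents=True, exist_ok=True)
    path = home / "admin_initial_credentials.txt"
    # os.open with 0o600 avoids a window where the file is world-readable;
    # the explicit chmod pins the mode even under a permissive umask.
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o600)
    with os.fdopen(fd, "w") as f:
        f.write(f"# DeerFlow {label} admin credentials\n")
        f.write(f"email: {email}\npassword: {password}\n")
    os.chmod(path, 0o600)
    return path.resolve()

with tempfile.TemporaryDirectory() as tmp:
    p = write_initial_credentials("admin@example.com", "s3cret",
                                  label="initial", home=Path(tmp) / ".deer-flow")
    assert p.stat().st_mode & 0o777 == 0o600
```

Creating the fd with the restrictive mode up front (rather than writing then chmodding) is the part CodeQL-style scanners care about: there is no instant at which the secret sits in a readable file.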
In combination with my Task-12 decision to skip @require_auth decorators on routes, this created a gap where a request with any cookie-shaped string (e.g. access_token=not-a-jwt) would bypass authentication on routes that do not touch the repository (/api/models, /api/mcp/config, /api/memory, /api/skills, …).

Fix: the middleware now calls get_current_user_from_request() strictly and catches the resulting HTTPException to render a 401 with the proper fine-grained error code (token_invalid, token_expired, user_not_found, …). On success it stamps request.state.user and the contextvar so repository-layer owner filters work downstream.

The 4 old "_with_cookie_passes" tests in test_auth_middleware.py were written for the presence-only behaviour; they asserted that a junk cookie would make the handler return 200. They are renamed to "_with_junk_cookie_rejected" and their assertions flipped to 401. The negative path (no cookie → 401 not_authenticated) is unchanged.

Verified:
- no cookie → 401 not_authenticated
- junk cookie → 401 token_invalid (the fixed bug)
- expired cookie → 401 token_expired

Tests: 284 passed (auth + persistence + isolation)
Lint: clean

* security(auth): wire @require_permission(owner_check=True) on isolation routes

Apply the require_permission decorator to all 28 routes that take a {thread_id} path parameter. Combined with the strict middleware (previous commit), this gives the double-layer protection that AUTH_TEST_PLAN test 7.5.9 documents:

Layer 1 (AuthMiddleware): cookie + JWT validation, rejects junk cookies and stamps request.state.user
Layer 2 (@require_permission with owner_check=True): per-resource ownership verification via ThreadMetaStore.check_access — returns 404 if a different user owns the thread

The decorator's owner_check branch is rewritten to use the SQL thread_meta_repo (the 2.0-rc persistence layer) instead of the LangGraph store path that PR #1728 used (_store_get / get_store in routers/threads.py).
The inject_record convenience is dropped — no caller in 2.0 needs the LangGraph blob, and the SQL repo has a different shape.

Routes decorated (28 total):
- threads.py: delete, patch, get, get-state, post-state, post-history
- thread_runs.py: post-runs, post-runs-stream, post-runs-wait, list_runs, get_run, cancel_run, join_run, stream_existing_run, list_thread_messages, list_run_messages, list_run_events, thread_token_usage
- feedback.py: create, list, stats, delete
- uploads.py: upload (added Request param), list, delete
- artifacts.py: get_artifact
- suggestions.py: generate (renamed body parameter to avoid conflict with FastAPI Request)

Test fixes:
- test_suggestions_router.py: bypass the decorator via __wrapped__ (the unit tests cover parsing logic, not auth — no point spinning up a thread_meta_repo just to test JSON unwrapping)
- test_auth_middleware.py 4 fake-cookie tests: already updated in the previous commit (745bf432)

Tests: 293 passed (auth + persistence + isolation + suggestions)
Lint: clean

* security(auth): defense-in-depth fixes from release validation pass

Eight findings caught while running the AUTH_TEST_PLAN end-to-end against the deployed sg_dev stack. Each is a pre-condition for shipping release/2.0-rc that the previous PRs missed.

Backend hardening
- routers/auth.py: the rate limiter's X-Real-IP handling now requires an AUTH_TRUSTED_PROXIES whitelist (CIDR/IP allowlist). Without nginx in front, the previous code honored arbitrary X-Real-IP, letting an attacker rotate the header to fully bypass the per-IP login lockout.
- routers/auth.py: 36-entry common-password blocklist via a Pydantic field_validator on RegisterRequest + ChangePasswordRequest. The shared _validate_strong_password helper keeps the constraint in one place.
- routers/threads.py: ThreadCreateRequest + ThreadPatchRequest strip server-reserved metadata keys (owner_id, user_id) via a Pydantic field_validator so a forged value can never round-trip back to other clients reading the same thread.
  The actual ownership invariant stays on the threads_meta row; this closes the metadata-blob echo gap.
- authz.py + thread_meta/sql.py: require_permission gains a require_existing flag plumbed through check_access(require_existing=True). Destructive routes (DELETE/PATCH/state-update/runs/feedback) now treat a missing thread_meta row as 404 instead of "untracked legacy thread, allow", closing the cross-user delete-idempotence gap where any user could successfully DELETE another user's deleted thread.
- repositories/sqlite.py + base.py: update_user raises UserNotFoundError on a vanished row instead of silently returning the input. A concurrent delete during password reset can no longer look like a successful update.
- runtime/user_context.py: resolve_owner_id() coerces User.id (UUID) to str at the contextvar boundary so SQLAlchemy String(64) columns can bind it. The whole 2.0-rc isolation pipeline was previously broken end-to-end (POST /api/threads → 500 "type 'UUID' is not supported").
- persistence/engine.py: a SQLAlchemy listener enables PRAGMA journal_mode=WAL, synchronous=NORMAL, foreign_keys=ON on every new SQLite connection. TC-UPG-06 in the test plan expects WAL; the previous code shipped with the default 'delete' journal.
- auth_middleware.py: stamp request.state.auth = AuthContext(...) so @require_permission's short-circuit fires; previously every isolation request did a duplicate JWT decode + users SELECT. Also unifies the 401 payload through AuthErrorResponse(...).model_dump().
- app.py: the _ensure_admin_user restructure removes the noqa F821 scoping bug where 'password' was referenced outside the branch that defined it. A new _announce_credentials helper absorbs the duplicated log block in the fresh-admin and reset-admin branches.

* fix(frontend+nginx): rollout CSRF on every state-changing client path

The frontend was 100% broken in gateway-pro mode for any user trying to open a specific chat thread. Three cumulative bugs each silently masked the next.
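The PRAGMA bullet above is easy to demonstrate with stdlib sqlite3; the real code registers the same statements on SQLAlchemy's per-connection "connect" event, so treat this as a behavioural sketch rather than the engine.py implementation.

```python
# Sketch: the three per-connection PRAGMAs the listener applies.
import os
import sqlite3
import tempfile

def configure(conn: sqlite3.Connection) -> None:
    conn.execute("PRAGMA journal_mode=WAL")    # concurrent readers + one writer
    conn.execute("PRAGMA synchronous=NORMAL")  # safe with WAL, fewer fsyncs
    conn.execute("PRAGMA foreign_keys=ON")     # FK enforcement is off by default

tmp = tempfile.mkdtemp()
conn = sqlite3.connect(os.path.join(tmp, "app.db"))
configure(conn)
mode = conn.execute("PRAGMA journal_mode").fetchone()[0]
assert mode == "wal"
conn.close()
```

Note the asymmetry that makes the listener necessary: journal_mode=WAL persists in the database file, but foreign_keys and synchronous are per-connection settings, so they must be re-applied on every new connection the pool opens.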
LangGraph SDK CSRF gap (api-client.ts)
- The Client constructor took only apiUrl — no defaultHeaders, no fetch interceptor. The SDK's internal fetch never sent X-CSRF-Token, so every state-changing /api/langgraph-compat/* call (runs/stream, threads/search, threads/{tid}/history, ...) hit CSRFMiddleware and got 403 before reaching the auth check. UI symptom: an empty thread page with no error message; the SPA's hooks swallowed the rejection.
- Fix: pass an onRequest hook that injects X-CSRF-Token from the csrf_token cookie per request. Reading the cookie per call (not at construction time) handles login / logout / password-change cookie rotation transparently. The SDK's prepareFetchOptions calls onRequest for both regular requests AND streaming/SSE/reconnect, so the same hook covers runs.stream and runs.joinStream.

Raw fetch CSRF gap (7 files)
- Audit: 11 frontend fetch sites, only 2 included CSRF (login/setup + account-settings change-password). The other 7 routed through raw fetch() with no header — suggestions, memory, agents, mcp, skills, uploads, and the local thread-cleanup hook all 403'd silently.
- Fix: enhance fetcher.ts:fetchWithAuth to auto-inject X-CSRF-Token on POST/PUT/DELETE/PATCH from a single shared readCsrfCookie() helper. Convert all 7 raw fetch() callers to fetchWithAuth so the contract is centrally enforced. api-client.ts and fetcher.ts share readCsrfCookie + STATE_CHANGING_METHODS to avoid drift.

nginx routing + buffering (nginx.local.conf)
- The auth feature shipped without updating the nginx config: per-API explicit location blocks but no /api/v1/auth/, /api/feedback, /api/runs. The frontend's client-side fetches to /api/v1/auth/login/local 404'd from the Next.js side because nginx routed /api/* to the frontend.
- Fix: add a catch-all `location /api/` that proxies to the gateway. nginx longest-prefix matching keeps the explicit blocks (/api/models, /api/threads regex, /api/langgraph/, ...) winning for their paths.
- Fix: disable proxy_buffering + proxy_request_buffering for the frontend `location /` block. Without it, nginx tries to spool large Next.js chunks into /var/lib/nginx/proxy (root-owned) and fails with Permission denied → ERR_INCOMPLETE_CHUNKED_ENCODING → ChunkLoadError.

* test(auth): release-validation test infra and new coverage

Test fixtures and unit tests added during the validation pass.

Router test helpers (NEW: tests/_router_auth_helpers.py)
- make_authed_test_app(): builds a FastAPI test app with a stub middleware that stamps request.state.user + request.state.auth and a permissive thread_meta_repo mock. TestClient-based router tests (test_artifacts_router, test_threads_router) use it instead of bare FastAPI() so the new @require_permission(owner_check=True) decorators short-circuit cleanly.
- call_unwrapped(): walks the __wrapped__ chain to invoke the underlying handler without going through the authz wrappers. Direct-call tests (test_uploads_router) use it. Typed with ParamSpec so the wrapped signature flows through.

Backend test additions
- test_auth.py: 7 tests for the new _get_client_ip trust model (no proxy / trusted proxy / untrusted peer / XFF rejection / invalid CIDR / no client). 5 tests for the password blocklist (literal, case-insensitive, strong password accepted, change-password binding, short-password length-check still fires before the blocklist). test_update_user_raises_when_row_concurrently_deleted closes a shipped-without-coverage gap on the new UserNotFoundError contract.
- test_thread_meta_repo.py: 4 tests for check_access(require_existing=True) — strict missing-row denial, strict owner match, strict owner mismatch, strict null-owner still allowed (shared rows survive the tightening).
- test_ensure_admin.py: 3 tests for _migrate_orphaned_threads / _iter_store_items pagination, covering the TC-UPG-02 upgrade story end-to-end via a mock store. Closes the gap where the cursor pagination was untested even though the previous PR rewrote it.
- test_threads_router.py: 5 tests for _strip_reserved_metadata (owner_id removal, user_id removal, safe-keys passthrough, empty input, both-stripped).
- test_auth_type_system.py: replace "password123" fixtures with Tr0ub4dor3a / AnotherStr0ngPwd! so the new password blocklist doesn't reject the test data.

* docs(auth): refresh TC-DOCKER-05 + document Docker validation gap

- AUTH_TEST_PLAN.md TC-DOCKER-05: the previous expectation ("admin password visible in docker logs") was stale after the simplify pass that moved credentials to a 0600 file. The grep "Password:" check would have silently failed and given a false sense of coverage. The new expectation matches the actual file-based path: a 0600 file in DEER_FLOW_HOME, the log shows the path (not the secret), and a reverse grep asserts no leaked password in container logs.
- NEW: docs/AUTH_TEST_DOCKER_GAP.md documents the only un-executed block in the test plan (TC-DOCKER-01..06). Reason: the sg_dev validation host has no Docker daemon installed. The doc maps each Docker case to an already-validated bare-metal equivalent (TC-1.1, TC-REENT-01, TC-API-02, etc.) so the gap is auditable, and includes pre-flight reproduction steps for whoever has Docker available.

---------

Co-authored-by: greatmengqi <chenmengqi.0376@bytedance.com>

* feat: replace auto-admin creation with interactive setup flow

On first boot, instead of auto-creating admin@deerflow.dev with a random password written to a credential file, DeerFlow now redirects to /setup where the user creates the admin account interactively.
Backend:
- Remove auto admin creation from _ensure_admin_user (now only runs the orphan thread migration when an admin already exists)
- Add POST /api/v1/auth/initialize endpoint (public, only callable when 0 users exist; auto-logs in after creation)
- Add /api/v1/auth/initialize to public paths in auth_middleware.py and CSRF-exempt paths in csrf_middleware.py
- Update test_ensure_admin.py to match the new behavior
- Add test_initialize_admin.py with 8 tests for the new endpoint

Frontend:
- Add system_setup_required to the AuthResult type
- getServerSideUser() checks setup-status when unauthenticated
- Auth layout allows system_setup_required (renders children)
- Workspace layout redirects system_setup_required → /setup
- Login page redirects to /setup when the system is not initialized
- Setup page detects its mode via isAuthenticated: unauth = create-admin form (calls /initialize), auth = change-password form (existing)

Agent-Logs-Url: https://github.com/bytedance/deer-flow/sessions/9c2471c5-d6e9-4ada-9192-61b56007b8d7
Co-authored-by: foreleven <4785594+foreleven@users.noreply.github.com>

* fix: add cleanup flags to useEffect async fetches in setup/login pages

Agent-Logs-Url: https://github.com/bytedance/deer-flow/sessions/9c2471c5-d6e9-4ada-9192-61b56007b8d7
Co-authored-by: foreleven <4785594+foreleven@users.noreply.github.com>

* fix: address reviewer feedback on /initialize endpoint security and robustness

1. Concurrency/register-blocking: switch setup-status and /initialize to check admin_count (via the new count_admin_users()) instead of the total user_count — /register can no longer block admin initialization
2. Dedicated error code: add SYSTEM_ALREADY_INITIALIZED to AuthErrorCode and use it in /initialize 409 responses; add it to the frontend types
3. Init token security: generate a one-time token at startup (logged to stdout) and require it in the /initialize request body — prevents an attacker from claiming admin on an exposed first-boot instance
4. Setup-status fetch timeout: apply the SSR_AUTH_TIMEOUT_MS abort-controller pattern to the setup-status fetch in server.ts (same as /auth/me)

Backend repo/provider: add count_admin_users() to base, SQLite, and LocalAuthProvider. Tests updated + new token-validation/register-blocking test cases added.

Agent-Logs-Url: https://github.com/bytedance/deer-flow/sessions/b9f531fc-8ed3-41db-b416-237f243b45fd
Co-authored-by: foreleven <4785594+foreleven@users.noreply.github.com>

* fix: address code review nits — move secrets import, add INVALID_INIT_TOKEN error code, fix test assertions

Agent-Logs-Url: https://github.com/bytedance/deer-flow/sessions/b9f531fc-8ed3-41db-b416-237f243b45fd
Co-authored-by: foreleven <4785594+foreleven@users.noreply.github.com>

* refactor: remove init_token generation and validation from admin setup flow

* fix: re-apply init_token security for /initialize endpoint

Re-adds the one-time init_token requirement to the /initialize endpoint, building on the human's UI improvements in 5eeeb09. This addresses the two remaining unresolved review threads:
1. Dedicated error codes (SYSTEM_ALREADY_INITIALIZED + INVALID_INIT_TOKEN)
2. Init token security gate — requires the token logged at startup

Changes:
- errors.py: re-add the INVALID_INIT_TOKEN error code
- routers/auth.py: re-add `import secrets`, the `init_token` field, token validation with secrets.compare_digest, and token consumption
- app.py: re-add token generation/logging and app.state.init_token = None
- setup/page.tsx: re-add initToken state + input field (human's UI kept)
- types.ts: re-add the invalid_init_token error code
- test_initialize_admin.py: restore full token test coverage
- test_ensure_admin.py: restore the init_token assertions

Agent-Logs-Url: https://github.com/bytedance/deer-flow/sessions/646fb5c0-ec09-41aa-9fe9-e6f7c32364e8
Co-authored-by: foreleven <4785594+foreleven@users.noreply.github.com>

* fix: make init_token optional (403 not 422 on missing), don't consume the token on error paths

Agent-Logs-Url: https://github.com/bytedance/deer-flow/sessions/646fb5c0-ec09-41aa-9fe9-e6f7c32364e8
Co-authored-by: foreleven <4785594+foreleven@users.noreply.github.com>

* refactor: remove redundant skill-related functions and documentation

---------

Co-authored-by: rayhpeng <rayhpeng@gmail.com>
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Co-authored-by: Copilot Autofix powered by AI <223894421+github-code-quality[bot]@users.noreply.github.com>
Co-authored-by: DanielWalnut <45447813+hetaoBackend@users.noreply.github.com>
Co-authored-by: Willem Jiang <willem.jiang@gmail.com>
Co-authored-by: JilongSun <965640067@qq.com>
Co-authored-by: jie <49781832+stan-fu@users.noreply.github.com>
Co-authored-by: cooper <cooperfu@tencent.com>
Co-authored-by: yangzheli <43645580+yangzheli@users.noreply.github.com>
Co-authored-by: greatmengqi <chenmengqi.0376@gmail.com>
Co-authored-by: greatmengqi <chenmengqi.0376@bytedance.com>
Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: foreleven <4785594+foreleven@users.noreply.github.com>
Co-authored-by: jiangfeng.11 <jiangfeng.11@bytedance.com>
This commit is contained in:
parent e75a2ff29a
commit 31e5b586a1
@@ -2,7 +2,6 @@ import logging
import os
from collections.abc import AsyncGenerator
from contextlib import asynccontextmanager
from datetime import UTC

from fastapi import FastAPI
from fastapi.middleware.cors import CORSMiddleware
@@ -41,77 +40,62 @@ logger = logging.getLogger(__name__)


async def _ensure_admin_user(app: FastAPI) -> None:
    """Auto-create the admin user on first boot if no users exist.
    """Startup hook: generate init token on first boot; migrate orphan threads otherwise.

    After admin creation, migrate orphan threads from the LangGraph
    store (metadata.owner_id unset) to the admin account. This is the
    "no-auth → with-auth" upgrade path: users who ran DeerFlow without
    authentication have existing LangGraph thread data that needs an
    owner assigned.
    First boot (no admin exists):
    - Generates a one-time ``init_token`` stored in ``app.state.init_token``
    - Logs the token to stdout so the operator can copy-paste it into the
      ``/setup`` form to create the first admin account interactively.
    - Does NOT create any user accounts automatically.

    Subsequent boots (admin already exists):
    - Runs the one-time "no-auth → with-auth" orphan thread migration for
      existing LangGraph thread metadata that has no owner_id.

    No SQL persistence migration is needed: the four owner_id columns
    (threads_meta, runs, run_events, feedback) only come into existence
    alongside the auth module via create_all, so freshly created tables
    never contain NULL-owner rows. "Existing persistence DB + new auth"
    is not a supported upgrade path — fresh install or wipe-and-retry.

    Multi-worker safe: relies on SQLite UNIQUE constraint to resolve
    races during admin creation. Only the worker that successfully
    creates/updates the admin prints the password; losers silently skip.
    never contain NULL-owner rows.
    """
    import secrets

    from app.gateway.auth.credential_file import write_initial_credentials
    from app.gateway.deps import get_local_provider
    from sqlalchemy import select

    def _announce_credentials(email: str, password: str, *, label: str, headline: str) -> None:
        """Write the password to a 0600 file and log the path (never the secret)."""
        cred_path = write_initial_credentials(email, password, label=label)
        logger.info("=" * 60)
        logger.info("  %s", headline)
        logger.info("  Credentials written to: %s (mode 0600)", cred_path)
        logger.info("  Change it after login: Settings -> Account")
        logger.info("=" * 60)
    from app.gateway.deps import get_local_provider
    from deerflow.persistence.engine import get_session_factory
    from deerflow.persistence.user.model import UserRow

    provider = get_local_provider()
    user_count = await provider.count_users()
    admin_count = await provider.count_admin_users()

    admin = None
    if admin_count == 0:
        init_token = secrets.token_urlsafe(32)
        app.state.init_token = init_token
        logger.info("=" * 60)
        logger.info("  First boot detected — no admin account exists.")
        logger.info("  Use the one-time token below to create the admin account.")
        logger.info("  Copy it into the /setup form when prompted.")
        logger.info("  INIT TOKEN: %s", init_token)
        logger.info("  Visit /setup to complete admin account creation.")
        logger.info("=" * 60)
        return

    if user_count == 0:
        password = secrets.token_urlsafe(16)
        try:
            admin = await provider.create_user(email="admin@deerflow.dev", password=password, system_role="admin", needs_setup=True)
        except ValueError:
            return  # Another worker already created the admin.
        _announce_credentials(admin.email, password, label="initial", headline="Admin account created on first boot")
    else:
        # Admin exists but setup never completed — reset password so operator
        # can always find it in the console without needing the CLI.
        # Multi-worker guard: if admin was created less than 30s ago, another
        # worker just created it and will print the password — skip reset.
        admin = await provider.get_user_by_email("admin@deerflow.dev")
        if admin and admin.needs_setup:
            import time
    # Admin already exists — run orphan thread migration for any
    # LangGraph thread metadata that pre-dates the auth module.
    sf = get_session_factory()
    if sf is None:
        return

            age = time.time() - admin.created_at.replace(tzinfo=UTC).timestamp()
            if age >= 30:
                from app.gateway.auth.password import hash_password_async
    async with sf() as session:
        stmt = select(UserRow).where(UserRow.system_role == "admin").limit(1)
        row = (await session.execute(stmt)).scalar_one_or_none()

                password = secrets.token_urlsafe(16)
                admin.password_hash = await hash_password_async(password)
                admin.token_version += 1
                await provider.update_user(admin)
                _announce_credentials(admin.email, password, label="reset", headline="Admin account setup incomplete — password reset")
    if row is None:
        return  # Should not happen (admin_count > 0 above), but be safe.

    if admin is None:
        return  # Nothing to bind orphans to.

    admin_id = str(admin.id)
    admin_id = str(row.id)

    # LangGraph store orphan migration — non-fatal.
    # This covers the "no-auth → with-auth" upgrade path for users
    # whose existing LangGraph thread metadata has no owner_id set.
    store = getattr(app.state, "store", None)
    if store is not None:
        try:
@@ -374,6 +358,11 @@ This gateway provides custom endpoints for models, MCP configuration, skills, an
        """
        return {"status": "healthy", "service": "deer-flow-gateway"}

    # Ensure init_token always exists on app.state (None until lifespan sets it
    # if no admin is found). This prevents AttributeError in tests that don't
    # run the full lifespan.
    app.state.init_token = None

    return app
@@ -20,6 +20,8 @@ class AuthErrorCode(StrEnum):
    EMAIL_ALREADY_EXISTS = "email_already_exists"
    PROVIDER_NOT_FOUND = "provider_not_found"
    NOT_AUTHENTICATED = "not_authenticated"
    SYSTEM_ALREADY_INITIALIZED = "system_already_initialized"
    INVALID_INIT_TOKEN = "invalid_init_token"


class TokenError(StrEnum):
@@ -78,6 +78,10 @@ class LocalAuthProvider(AuthProvider):
        """Return total number of registered users."""
        return await self._repo.count_users()

    async def count_admin_users(self) -> int:
        """Return number of admin users."""
        return await self._repo.count_admin_users()

    async def update_user(self, user: User) -> User:
        """Update an existing user."""
        return await self._repo.update_user(user)
@@ -83,6 +83,11 @@ class UserRepository(ABC):
        """Return total number of registered users."""
        ...

    @abstractmethod
    async def count_admin_users(self) -> int:
        """Return number of users with system_role == 'admin'."""
        ...

    @abstractmethod
    async def get_user_by_oauth(self, provider: str, oauth_id: str) -> User | None:
        """Get user by OAuth provider and ID.
@@ -114,6 +114,11 @@ class SQLiteUserRepository(UserRepository):
        async with self._sf() as session:
            return await session.scalar(stmt) or 0

    async def count_admin_users(self) -> int:
        stmt = select(func.count()).select_from(UserRow).where(UserRow.system_role == "admin")
        async with self._sf() as session:
            return await session.scalar(stmt) or 0

    async def get_user_by_oauth(self, provider: str, oauth_id: str) -> User | None:
        stmt = select(UserRow).where(UserRow.oauth_provider == provider, UserRow.oauth_id == oauth_id)
        async with self._sf() as session:
@@ -36,6 +36,7 @@ _PUBLIC_EXACT_PATHS: frozenset[str] = frozenset(
        "/api/v1/auth/register",
        "/api/v1/auth/logout",
        "/api/v1/auth/setup-status",
        "/api/v1/auth/initialize",
    }
)


@@ -48,6 +48,7 @@ _AUTH_EXEMPT_PATHS: frozenset[str] = frozenset(
        "/api/v1/auth/login/local",
        "/api/v1/auth/logout",
        "/api/v1/auth/register",
        "/api/v1/auth/initialize",
    }
)
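The hunks above add `/api/v1/auth/initialize` to two exact-path allowlists. A hedged sketch of how such a frozenset is typically consulted in middleware — `is_public` is an illustrative helper, not the repo's actual middleware code:

```python
# Sketch: O(1) exact-path membership test. Because the allowlist holds
# exact paths (no prefixes), subpaths of a public route stay protected.
_PUBLIC_EXACT_PATHS: frozenset[str] = frozenset(
    {
        "/api/v1/auth/register",
        "/api/v1/auth/logout",
        "/api/v1/auth/setup-status",
        "/api/v1/auth/initialize",
    }
)


def is_public(path: str) -> bool:
    # Exact match only: "/api/v1/auth/initialize/extra" is NOT public.
    return path in _PUBLIC_EXACT_PATHS
```

Using a frozenset rather than prefix matching keeps the auth bypass surface minimal: only the four literal routes are exempt.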
@@ -2,6 +2,7 @@

import logging
import os
import secrets
import time
from ipaddress import ip_address, ip_network
@@ -378,9 +379,74 @@ async def get_me(request: Request):

@router.get("/setup-status")
async def setup_status():
    """Check if admin account exists. Always False after first boot."""
    user_count = await get_local_provider().count_users()
    return {"needs_setup": user_count == 0}
    """Check if an admin account exists. Returns needs_setup=True when no admin exists."""
    admin_count = await get_local_provider().count_admin_users()
    return {"needs_setup": admin_count == 0}


class InitializeAdminRequest(BaseModel):
    """Request model for first-boot admin account creation."""

    email: EmailStr
    password: str = Field(..., min_length=8)
    init_token: str | None = Field(default=None, description="One-time initialization token printed to server logs on first boot")

    _strong_password = field_validator("password")(classmethod(lambda cls, v: _validate_strong_password(v)))


@router.post("/initialize", response_model=UserResponse, status_code=status.HTTP_201_CREATED)
async def initialize_admin(request: Request, response: Response, body: InitializeAdminRequest):
    """Create the first admin account on initial system setup.

    Only callable when no admin exists. Returns 409 Conflict if an admin
    already exists. Requires the one-time ``init_token`` that is logged to
    stdout at startup whenever the system has no admin account.

    On success the token is consumed (one-time use), the admin account is
    created with ``needs_setup=False``, and the session cookie is set.
    """
    # Validate the one-time initialization token. The token is generated
    # at startup and stored in app.state.init_token; it is consumed here on
    # the first successful call so it cannot be replayed.
    # Using str | None allows a missing/null token to return 403 (not 422),
    # giving a consistent error response regardless of whether the token is
    # absent or incorrect.
    stored_token: str | None = getattr(request.app.state, "init_token", None)
    provided_token: str = body.init_token or ""
    if stored_token is None or not secrets.compare_digest(stored_token, provided_token):
        raise HTTPException(
            status_code=status.HTTP_403_FORBIDDEN,
            detail=AuthErrorResponse(code=AuthErrorCode.INVALID_INIT_TOKEN, message="Invalid or expired initialization token").model_dump(),
        )

    admin_count = await get_local_provider().count_admin_users()
    if admin_count > 0:
        # Do NOT consume the token on this error path — consuming it here
        # would allow an attacker to exhaust the token by calling with the
        # correct token when admin already exists (denial-of-service).
        raise HTTPException(
            status_code=status.HTTP_409_CONFLICT,
            detail=AuthErrorResponse(code=AuthErrorCode.SYSTEM_ALREADY_INITIALIZED, message="System already initialized").model_dump(),
        )

    try:
        user = await get_local_provider().create_user(email=body.email, password=body.password, system_role="admin", needs_setup=False)
    except ValueError:
        # DB unique-constraint race: another concurrent request beat us.
        # Do NOT consume the token here for the same reason as above.
        raise HTTPException(
            status_code=status.HTTP_409_CONFLICT,
            detail=AuthErrorResponse(code=AuthErrorCode.SYSTEM_ALREADY_INITIALIZED, message="System already initialized").model_dump(),
        )

    # Consume the token only after successful initialization — this is the
    # single place where one-time use is enforced.
    request.app.state.init_token = None

    token = create_access_token(str(user.id), token_version=user.token_version)
    _set_session_cookie(response, token, request)

    return UserResponse(id=str(user.id), email=user.email, system_role=user.system_role)


# ── OAuth Endpoints (Future/Placeholder) ─────────────────────────────────
@@ -164,30 +164,6 @@ Skip simple one-off tasks.
"""


def _skill_mutability_label(category: str) -> str:
    return "[custom, editable]" if category == "custom" else "[built-in]"


def clear_skills_system_prompt_cache() -> None:
    _get_cached_skills_prompt_section.cache_clear()


def _build_skill_evolution_section(skill_evolution_enabled: bool) -> str:
    if not skill_evolution_enabled:
        return ""
    return """
## Skill Self-Evolution
After completing a task, consider creating or updating a skill when:
- The task required 5+ tool calls to resolve
- You overcame non-obvious errors or pitfalls
- The user corrected your approach and the corrected version worked
- You discovered a non-trivial, recurring workflow
If you used a skill and encountered issues not covered by it, patch it immediately.
Prefer patch over edit. Before creating a new skill, confirm with the user first.
Skip simple one-off tasks.
"""


def _build_subagent_section(max_concurrent: int) -> str:
    """Build the subagent system prompt section with dynamic concurrency limit.
@@ -85,6 +85,35 @@ async def run_agent(
    pre_run_snapshot: dict[str, Any] | None = None
    snapshot_capture_failed = False

    # Initialize RunJournal for event capture
    journal = None
    if event_store is not None:
        from deerflow.runtime.journal import RunJournal

        journal = RunJournal(
            run_id=run_id,
            thread_id=thread_id,
            event_store=event_store,
            track_token_usage=getattr(run_events_config, "track_token_usage", True),
        )

        # Write human_message event (model_dump format, aligned with checkpoint)
        human_msg = _extract_human_message(graph_input)
        if human_msg is not None:
            msg_metadata = {}
            if follow_up_to_run_id:
                msg_metadata["follow_up_to_run_id"] = follow_up_to_run_id
            await event_store.put(
                thread_id=thread_id,
                run_id=run_id,
                event_type="human_message",
                category="message",
                content=human_msg.model_dump(),
                metadata=msg_metadata or None,
            )
            content = human_msg.content
            journal.set_first_human_message(content if isinstance(content, str) else str(content))

    # Initialize RunJournal for event capture
    journal = None
    if event_store is not None:
@@ -23,9 +23,7 @@ dependencies = [
]

[project.optional-dependencies]
postgres = [
    "deerflow-harness[postgres]",
]
postgres = ["deerflow-harness[postgres]"]

[dependency-groups]
dev = ["pytest>=8.0.0", "ruff>=0.14.11"]
@@ -1,21 +1,19 @@
"""Tests for _ensure_admin_user() in app.py.

Covers: first-boot admin creation, auto-reset on needs_setup=True,
no-op on needs_setup=False, migration, and edge cases.
Covers: first-boot no-op (admin creation removed), orphan migration
when admin exists, no-op on no admin found, and edge cases.
"""

import asyncio
import os
from datetime import UTC, datetime, timedelta
from types import SimpleNamespace
from unittest.mock import AsyncMock, patch
from unittest.mock import AsyncMock, MagicMock, patch

import pytest

os.environ.setdefault("AUTH_JWT_SECRET", "test-secret-key-ensure-admin-testing-min-32")

from app.gateway.auth.config import AuthConfig, set_auth_config
from app.gateway.auth.models import User

_JWT_SECRET = "test-secret-key-ensure-admin-testing-min-32"

@@ -35,53 +33,90 @@ def _make_app_stub(store=None):
    return app


def _make_provider(user_count=0, admin_user=None):
def _make_provider(admin_count=0):
    p = AsyncMock()
    p.count_users = AsyncMock(return_value=user_count)
    p.create_user = AsyncMock(
        side_effect=lambda **kw: User(
            email=kw["email"],
            password_hash="hashed",
            system_role=kw.get("system_role", "user"),
            needs_setup=kw.get("needs_setup", False),
        )
    )
    p.get_user_by_email = AsyncMock(return_value=admin_user)
    p.count_users = AsyncMock(return_value=admin_count)
    p.count_admin_users = AsyncMock(return_value=admin_count)
    p.create_user = AsyncMock()
    p.update_user = AsyncMock(side_effect=lambda u: u)
    return p


# ── First boot: no users ─────────────────────────────────────────────────
def _make_session_factory(admin_row=None):
    """Build a mock async session factory that returns a row from execute()."""
    row_result = MagicMock()
    row_result.scalar_one_or_none.return_value = admin_row

    execute_result = MagicMock()
    execute_result.scalar_one_or_none.return_value = admin_row

    session = AsyncMock()
    session.execute = AsyncMock(return_value=execute_result)

    # Async context manager
    session_cm = AsyncMock()
    session_cm.__aenter__ = AsyncMock(return_value=session)
    session_cm.__aexit__ = AsyncMock(return_value=False)

    sf = MagicMock()
    sf.return_value = session_cm
    return sf


def test_first_boot_creates_admin():
    """count_users==0 → create admin with needs_setup=True."""
    provider = _make_provider(user_count=0)
# ── First boot: no admin → generate init_token, return early ─────────────


def test_first_boot_does_not_create_admin():
    """admin_count==0 → generate init_token, do NOT create admin automatically."""
    provider = _make_provider(admin_count=0)
    app = _make_app_stub()
    app.state.init_token = None  # lifespan sets this

    with patch("app.gateway.deps.get_local_provider", return_value=provider):
        with patch("app.gateway.auth.password.hash_password_async", new_callable=AsyncMock, return_value="hashed"):
            from app.gateway.app import _ensure_admin_user
        from app.gateway.app import _ensure_admin_user

            asyncio.run(_ensure_admin_user(app))
        asyncio.run(_ensure_admin_user(app))

    provider.create_user.assert_called_once()
    call_kwargs = provider.create_user.call_args[1]
    assert call_kwargs["email"] == "admin@deerflow.dev"
    assert call_kwargs["system_role"] == "admin"
    assert call_kwargs["needs_setup"] is True
    assert len(call_kwargs["password"]) > 10  # random password generated
    provider.create_user.assert_not_called()
    # init_token must have been set on app.state
    assert app.state.init_token is not None
    assert len(app.state.init_token) > 10


def test_first_boot_triggers_migration_if_store_present():
    """First boot with store → _migrate_orphaned_threads called."""
    provider = _make_provider(user_count=0)
def test_first_boot_skips_migration():
    """No admin → return early before any migration attempt."""
    provider = _make_provider(admin_count=0)
    store = AsyncMock()
    store.asearch = AsyncMock(return_value=[])
    app = _make_app_stub(store=store)
    app.state.init_token = None  # lifespan sets this

    with patch("app.gateway.deps.get_local_provider", return_value=provider):
        from app.gateway.app import _ensure_admin_user

        asyncio.run(_ensure_admin_user(app))

    store.asearch.assert_not_called()


# ── Admin exists: migration runs when admin row found ────────────────────


def test_admin_exists_triggers_migration():
    """Admin exists and admin row found → _migrate_orphaned_threads called."""
    from uuid import uuid4

    admin_row = MagicMock()
    admin_row.id = uuid4()

    provider = _make_provider(admin_count=1)
    sf = _make_session_factory(admin_row=admin_row)
    store = AsyncMock()
    store.asearch = AsyncMock(return_value=[])
    app = _make_app_stub(store=store)

    with patch("app.gateway.deps.get_local_provider", return_value=provider):
        with patch("app.gateway.auth.password.hash_password_async", new_callable=AsyncMock, return_value="hashed"):
            with patch("deerflow.persistence.engine.get_session_factory", return_value=sf):
                from app.gateway.app import _ensure_admin_user

                asyncio.run(_ensure_admin_user(app))
@@ -89,130 +124,77 @@ def test_first_boot_triggers_migration_if_store_present():
    store.asearch.assert_called_once()


def test_first_boot_no_store_skips_migration():
    """First boot without store → no crash, migration skipped."""
    provider = _make_provider(user_count=0)
def test_admin_exists_no_admin_row_skips_migration():
    """Admin count > 0 but DB row missing (edge case) → skip migration gracefully."""
    provider = _make_provider(admin_count=2)
    sf = _make_session_factory(admin_row=None)
    store = AsyncMock()
    app = _make_app_stub(store=store)

    with patch("app.gateway.deps.get_local_provider", return_value=provider):
        with patch("deerflow.persistence.engine.get_session_factory", return_value=sf):
            from app.gateway.app import _ensure_admin_user

            asyncio.run(_ensure_admin_user(app))

    store.asearch.assert_not_called()


def test_admin_exists_no_store_skips_migration():
    """Admin exists, row found, but no store → no crash, no migration."""
    from uuid import uuid4

    admin_row = MagicMock()
    admin_row.id = uuid4()

    provider = _make_provider(admin_count=1)
    sf = _make_session_factory(admin_row=admin_row)
    app = _make_app_stub(store=None)

    with patch("app.gateway.deps.get_local_provider", return_value=provider):
        with patch("app.gateway.auth.password.hash_password_async", new_callable=AsyncMock, return_value="hashed"):
            with patch("deerflow.persistence.engine.get_session_factory", return_value=sf):
                from app.gateway.app import _ensure_admin_user

                asyncio.run(_ensure_admin_user(app))

    provider.create_user.assert_called_once()
    # No assertion needed — just verify no crash


# ── Subsequent boot: needs_setup=True → auto-reset ───────────────────────


def test_needs_setup_true_resets_password():
    """Existing admin with needs_setup=True → password reset + token_version bumped."""
    admin = User(
        email="admin@deerflow.dev",
        password_hash="old-hash",
        system_role="admin",
        needs_setup=True,
        token_version=0,
        created_at=datetime.now(UTC) - timedelta(seconds=30),
    )
    provider = _make_provider(user_count=1, admin_user=admin)
    app = _make_app_stub()
def test_admin_exists_session_factory_none_skips_migration():
    """get_session_factory() returns None → return early, no crash."""
    provider = _make_provider(admin_count=1)
    store = AsyncMock()
    app = _make_app_stub(store=store)

    with patch("app.gateway.deps.get_local_provider", return_value=provider):
        with patch("app.gateway.auth.password.hash_password_async", new_callable=AsyncMock, return_value="new-hash"):
            with patch("deerflow.persistence.engine.get_session_factory", return_value=None):
                from app.gateway.app import _ensure_admin_user

                asyncio.run(_ensure_admin_user(app))

    # Password was reset
    provider.update_user.assert_called_once()
    updated = provider.update_user.call_args[0][0]
    assert updated.password_hash == "new-hash"
    assert updated.token_version == 1


def test_needs_setup_true_consecutive_resets_increment_version():
    """Two boots with needs_setup=True → token_version increments each time."""
    admin = User(
        email="admin@deerflow.dev",
        password_hash="hash",
        system_role="admin",
        needs_setup=True,
        token_version=3,
        created_at=datetime.now(UTC) - timedelta(seconds=30),
    )
    provider = _make_provider(user_count=1, admin_user=admin)
    app = _make_app_stub()

    with patch("app.gateway.deps.get_local_provider", return_value=provider):
        with patch("app.gateway.auth.password.hash_password_async", new_callable=AsyncMock, return_value="new-hash"):
            from app.gateway.app import _ensure_admin_user

            asyncio.run(_ensure_admin_user(app))

    updated = provider.update_user.call_args[0][0]
    assert updated.token_version == 4


# ── Subsequent boot: needs_setup=False → no-op ──────────────────────────


def test_needs_setup_false_no_reset():
    """Admin with needs_setup=False → no password reset, no update."""
    admin = User(
        email="admin@deerflow.dev",
        password_hash="stable-hash",
        system_role="admin",
        needs_setup=False,
        token_version=2,
    )
    provider = _make_provider(user_count=1, admin_user=admin)
    app = _make_app_stub()

    with patch("app.gateway.deps.get_local_provider", return_value=provider):
        from app.gateway.app import _ensure_admin_user

        asyncio.run(_ensure_admin_user(app))

    provider.update_user.assert_not_called()
    assert admin.password_hash == "stable-hash"
    assert admin.token_version == 2


# ── Edge cases ───────────────────────────────────────────────────────────


def test_no_admin_email_found_no_crash():
    """Users exist but no admin@deerflow.dev → no crash, no reset."""
    provider = _make_provider(user_count=3, admin_user=None)
    app = _make_app_stub()

    with patch("app.gateway.deps.get_local_provider", return_value=provider):
        from app.gateway.app import _ensure_admin_user

        asyncio.run(_ensure_admin_user(app))

    provider.update_user.assert_not_called()
    provider.create_user.assert_not_called()
    store.asearch.assert_not_called()


def test_migration_failure_is_non_fatal():
    """_migrate_orphaned_threads exception is caught and logged."""
    provider = _make_provider(user_count=0)
    from uuid import uuid4

    admin_row = MagicMock()
    admin_row.id = uuid4()

    provider = _make_provider(admin_count=1)
    sf = _make_session_factory(admin_row=admin_row)
    store = AsyncMock()
    store.asearch = AsyncMock(side_effect=RuntimeError("store crashed"))
    app = _make_app_stub(store=store)

    with patch("app.gateway.deps.get_local_provider", return_value=provider):
        with patch("app.gateway.auth.password.hash_password_async", new_callable=AsyncMock, return_value="hashed"):
            with patch("deerflow.persistence.engine.get_session_factory", return_value=sf):
                from app.gateway.app import _ensure_admin_user

                # Should not raise
                asyncio.run(_ensure_admin_user(app))

    provider.create_user.assert_called_once()


# ── Section 5.1-5.6 upgrade path: orphan thread migration ────────────────
229 backend/tests/test_initialize_admin.py Normal file
@@ -0,0 +1,229 @@
"""Tests for the POST /api/v1/auth/initialize endpoint.

Covers: first-boot admin creation, rejection when system already
initialized, invalid/missing init_token, password strength validation,
and public accessibility (no auth cookie required).
"""

import asyncio
import os

import pytest
from fastapi.testclient import TestClient

os.environ.setdefault("AUTH_JWT_SECRET", "test-secret-key-initialize-admin-min-32")

from app.gateway.auth.config import AuthConfig, set_auth_config

_TEST_SECRET = "test-secret-key-initialize-admin-min-32"
_INIT_TOKEN = "test-init-token-for-initialization-tests"


@pytest.fixture(autouse=True)
def _setup_auth(tmp_path):
    """Fresh SQLite engine + auth config per test."""
    from app.gateway import deps
    from deerflow.persistence.engine import close_engine, init_engine

    set_auth_config(AuthConfig(jwt_secret=_TEST_SECRET))
    url = f"sqlite+aiosqlite:///{tmp_path}/init_admin.db"
    asyncio.run(init_engine("sqlite", url=url, sqlite_dir=str(tmp_path)))
    deps._cached_local_provider = None
    deps._cached_repo = None
    try:
        yield
    finally:
        deps._cached_local_provider = None
        deps._cached_repo = None
        asyncio.run(close_engine())


@pytest.fixture()
def client(_setup_auth):
    from app.gateway.app import create_app
    from app.gateway.auth.config import AuthConfig, set_auth_config

    set_auth_config(AuthConfig(jwt_secret=_TEST_SECRET))
    app = create_app()
    # Pre-set the init token on app.state (normally done by the lifespan on
    # first boot; tests don't run the lifespan because it requires config.yaml).
    app.state.init_token = _INIT_TOKEN
    # Do NOT use TestClient as a context manager — that would trigger the
    # full lifespan which requires config.yaml. The auth endpoints work
    # without the lifespan (persistence engine is set up by _setup_auth).
    yield TestClient(app)


def _init_payload(**extra):
    """Build a valid /initialize payload with the test init_token."""
    return {
        "email": "admin@example.com",
        "password": "Str0ng!Pass99",
        "init_token": _INIT_TOKEN,
        **extra,
    }


# ── Happy path ────────────────────────────────────────────────────────────


def test_initialize_creates_admin_and_sets_cookie(client):
    """POST /initialize when no admin exists → 201, session cookie set."""
    resp = client.post("/api/v1/auth/initialize", json=_init_payload())
    assert resp.status_code == 201
    data = resp.json()
    assert data["email"] == "admin@example.com"
    assert data["system_role"] == "admin"
    assert "access_token" in resp.cookies


def test_initialize_needs_setup_false(client):
    """Newly created admin via /initialize has needs_setup=False."""
    client.post("/api/v1/auth/initialize", json=_init_payload())
    me = client.get("/api/v1/auth/me")
    assert me.status_code == 200
    assert me.json()["needs_setup"] is False


# ── Token validation ──────────────────────────────────────────────────────


def test_initialize_rejects_wrong_token(client):
    """Wrong init_token → 403 invalid_init_token."""
    resp = client.post(
        "/api/v1/auth/initialize",
        json={**_init_payload(), "init_token": "wrong-token"},
    )
    assert resp.status_code == 403
    assert resp.json()["detail"]["code"] == "invalid_init_token"


def test_initialize_rejects_empty_token(client):
    """Empty init_token → 403 (fails constant-time comparison against stored token)."""
    resp = client.post(
        "/api/v1/auth/initialize",
        json={**_init_payload(), "init_token": ""},
    )
    assert resp.status_code == 403


def test_initialize_token_consumed_after_success(client):
    """After a successful /initialize the token is consumed and cannot be reused."""
    client.post("/api/v1/auth/initialize", json=_init_payload())
    # The token is now None; any subsequent call with the old token must be rejected (403)
    resp2 = client.post(
        "/api/v1/auth/initialize",
        json={**_init_payload(), "email": "other@example.com"},
    )
    assert resp2.status_code == 403
# ── Rejection when already initialized ───────────────────────────────────


def test_initialize_rejected_when_admin_exists(client):
    """Second call to /initialize after admin exists → 409 system_already_initialized.

    The first call consumes the token. Re-setting it on app.state simulates
    what would happen if the operator somehow restarted or manually refreshed
    the token (e.g., in testing).
    """
    client.post("/api/v1/auth/initialize", json=_init_payload())
    # Re-set the token so the second attempt can pass token validation
    # and reach the admin-exists check.
    client.app.state.init_token = _INIT_TOKEN
    resp2 = client.post(
        "/api/v1/auth/initialize",
        json={**_init_payload(), "email": "other@example.com"},
    )
    assert resp2.status_code == 409
    body = resp2.json()
    assert body["detail"]["code"] == "system_already_initialized"


def test_initialize_token_not_consumed_on_admin_exists(client):
    """Token is NOT consumed when the admin-exists guard rejects the request.

    This prevents a DoS where an attacker calls with the correct token when
    admin already exists and permanently burns the init token.
    """
    client.post("/api/v1/auth/initialize", json=_init_payload())
    # Token consumed by success above; re-simulate the scenario:
    # admin exists, token is still valid (re-set), call should 409 and NOT consume token.
    client.app.state.init_token = _INIT_TOKEN
    client.post(
        "/api/v1/auth/initialize",
        json={**_init_payload(), "email": "other@example.com"},
    )
    # Token must still be set (not consumed) after the 409 rejection.
    assert client.app.state.init_token == _INIT_TOKEN


def test_initialize_register_does_not_block_initialization(client):
    """/register creating a user before /initialize doesn't block admin creation."""
    # Register a regular user first
    client.post("/api/v1/auth/register", json={"email": "regular@example.com", "password": "Tr0ub4dor3a"})
    # /initialize should still succeed (checks admin_count, not total user_count)
    resp = client.post("/api/v1/auth/initialize", json=_init_payload())
    assert resp.status_code == 201
    assert resp.json()["system_role"] == "admin"


# ── Endpoint is public (no cookie required) ───────────────────────────────


def test_initialize_accessible_without_cookie(client):
    """No access_token cookie needed for /initialize."""
    resp = client.post(
        "/api/v1/auth/initialize",
        json=_init_payload(),
        cookies={},
    )
    assert resp.status_code == 201


# ── Password validation ───────────────────────────────────────────────────


def test_initialize_rejects_short_password(client):
    """Password shorter than 8 chars → 422."""
    resp = client.post(
        "/api/v1/auth/initialize",
        json={**_init_payload(), "password": "short"},
    )
    assert resp.status_code == 422


def test_initialize_rejects_common_password(client):
    """Common password → 422."""
    resp = client.post(
        "/api/v1/auth/initialize",
        json={**_init_payload(), "password": "password123"},
    )
    assert resp.status_code == 422


# ── setup-status reflects initialization ─────────────────────────────────


def test_setup_status_before_initialization(client):
    """setup-status returns needs_setup=True before /initialize is called."""
    resp = client.get("/api/v1/auth/setup-status")
    assert resp.status_code == 200
    assert resp.json()["needs_setup"] is True


def test_setup_status_after_initialization(client):
    """setup-status returns needs_setup=False after /initialize succeeds."""
    client.post("/api/v1/auth/initialize", json=_init_payload())
    resp = client.get("/api/v1/auth/setup-status")
    assert resp.status_code == 200
    assert resp.json()["needs_setup"] is False


def test_setup_status_true_when_only_regular_user_exists(client):
    """setup-status returns needs_setup=True even when regular users exist (no admin)."""
    client.post("/api/v1/auth/register", json={"email": "regular@example.com", "password": "Tr0ub4dor3a"})
    resp = client.get("/api/v1/auth/setup-status")
    assert resp.status_code == 200
    assert resp.json()["needs_setup"] is True
@@ -21,6 +21,7 @@ export default async function AuthLayout({
    case "needs_setup":
      // Allow access to setup page
      return <AuthProvider initialUser={result.user}>{children}</AuthProvider>;
    case "system_setup_required":
    case "unauthenticated":
      return <AuthProvider initialUser={null}>{children}</AuthProvider>;
    case "gateway_unavailable":
@@ -2,9 +2,11 @@

import Link from "next/link";
import { useRouter, useSearchParams } from "next/navigation";
import { useTheme } from "next-themes";
import { useEffect, useState } from "react";

import { Button } from "@/components/ui/button";
import { FlickeringGrid } from "@/components/ui/flickering-grid";
import { Input } from "@/components/ui/input";
import { useAuth } from "@/core/auth/AuthProvider";
import { parseAuthError } from "@/core/auth/types";

@@ -46,6 +48,7 @@ export default function LoginPage() {
  const router = useRouter();
  const searchParams = useSearchParams();
  const { isAuthenticated } = useAuth();
  const { theme, resolvedTheme } = useTheme();

  const [email, setEmail] = useState("");
  const [password, setPassword] = useState("");

@@ -64,6 +67,26 @@ export default function LoginPage() {
    }
  }, [isAuthenticated, redirectPath, router]);

  // Redirect to setup if the system has no users yet
  useEffect(() => {
    let cancelled = false;

    void fetch("/api/v1/auth/setup-status")
      .then((r) => r.json())
      .then((data: { needs_setup?: boolean }) => {
        if (!cancelled && data.needs_setup) {
          router.push("/setup");
        }
      })
      .catch(() => {
        // Ignore errors; user stays on login page
      });

    return () => {
      cancelled = true;
    };
  }, [router]);

  const handleSubmit = async (e: React.FormEvent) => {
    e.preventDefault();
    setError("");
@@ -97,25 +120,35 @@

      // Both login and register set a cookie — redirect to workspace
      router.push(redirectPath);
    } catch (_err) {
    } catch {
      setError("Network error. Please try again.");
    } finally {
      setLoading(false);
    }
  };

  const actualTheme = theme === "system" ? resolvedTheme : theme;

  return (
    <div className="flex min-h-screen items-center justify-center bg-[#0a0a0a]">
      <div className="border-border/20 w-full max-w-md space-y-6 rounded-lg border bg-black/50 p-8 backdrop-blur-sm">
    <div className="bg-background flex min-h-screen items-center justify-center">
      <FlickeringGrid
        className="absolute inset-0 z-0 mask-[url(/images/deer.svg)] mask-size-[100vw] mask-center mask-no-repeat md:mask-size-[72vh]"
        squareSize={4}
        gridGap={4}
        color={actualTheme === "dark" ? "white" : "black"}
        maxOpacity={0.3}
        flickerChance={0.25}
      />
      <div className="border-border/20 bg-background/5 w-full max-w-md space-y-6 rounded-3xl border p-8 backdrop-blur-sm">
        <div className="text-center">
          <h1 className="font-serif text-3xl">DeerFlow</h1>
          <h1 className="text-foreground font-serif text-3xl">DeerFlow</h1>
          <p className="text-muted-foreground mt-2">
            {isLogin ? "Sign in to your account" : "Create a new account"}
          </p>
        </div>

        <form onSubmit={handleSubmit} className="space-y-4">
          <div>
        <form onSubmit={handleSubmit} className="space-y-2">
          <div className="flex flex-col space-y-1">
            <label htmlFor="email" className="text-sm font-medium">
              Email
            </label>

@@ -126,11 +159,9 @@
              onChange={(e) => setEmail(e.target.value)}
              placeholder="you@example.com"
              required
              className="mt-1 bg-white text-black"
            />
          </div>

          <div>
          <div className="flex flex-col space-y-1">
            <label htmlFor="password" className="text-sm font-medium">
              Password
            </label>

@@ -142,7 +173,6 @@
              placeholder="•••••••"
              required
              minLength={isLogin ? 6 : 8}
              className="mt-1 bg-white text-black"
|
||||
/>
|
||||
</div>
|
||||
|
||||
|
||||
@@ -1,23 +1,108 @@
"use client";

import { useRouter } from "next/navigation";
import { useState } from "react";
import { useTheme } from "next-themes";
import { useEffect, useState } from "react";

import { Button } from "@/components/ui/button";
import { FlickeringGrid } from "@/components/ui/flickering-grid";
import { Input } from "@/components/ui/input";
import { getCsrfHeaders } from "@/core/api/fetcher";
import { useAuth } from "@/core/auth/AuthProvider";
import { parseAuthError } from "@/core/auth/types";

type SetupMode = "loading" | "init_admin" | "change_password";

export default function SetupPage() {
const router = useRouter();
const { user, isAuthenticated } = useAuth();
const { theme, resolvedTheme } = useTheme();
const [mode, setMode] = useState<SetupMode>("loading");

// --- Shared state ---
const [email, setEmail] = useState("");
const [newPassword, setNewPassword] = useState("");
const [confirmPassword, setConfirmPassword] = useState("");
const [currentPassword, setCurrentPassword] = useState("");
const [error, setError] = useState("");
const [loading, setLoading] = useState(false);

const handleSetup = async (e: React.FormEvent) => {
// --- Init-admin mode only ---
const [initToken, setInitToken] = useState("");

// --- Change-password mode only ---
const [currentPassword, setCurrentPassword] = useState("");

useEffect(() => {
let cancelled = false;

if (isAuthenticated && user?.needs_setup) {
setMode("change_password");
} else if (!isAuthenticated) {
// Check if the system has no users yet
void fetch("/api/v1/auth/setup-status")
.then((r) => r.json())
.then((data: { needs_setup?: boolean }) => {
if (cancelled) return;
if (data.needs_setup) {
setMode("init_admin");
} else {
// System already set up and user is not logged in — go to login
router.push("/login");
}
})
.catch(() => {
if (!cancelled) router.push("/login");
});
} else {
// Authenticated but needs_setup is false — already set up
router.push("/workspace");
}

return () => {
cancelled = true;
};
}, [isAuthenticated, user, router]);

// ── Init-admin handler ─────────────────────────────────────────────
const handleInitAdmin = async (e: React.FormEvent) => {
e.preventDefault();
setError("");

if (newPassword !== confirmPassword) {
setError("Passwords do not match");
return;
}

setLoading(true);
try {
const res = await fetch("/api/v1/auth/initialize", {
method: "POST",
headers: { "Content-Type": "application/json" },
credentials: "include",
body: JSON.stringify({
email,
password: newPassword,
init_token: initToken,
}),
});

if (!res.ok) {
const data = await res.json();
const authError = parseAuthError(data);
setError(authError.message);
return;
}

router.push("/workspace");
} catch {
setError("Network error. Please try again.");
} finally {
setLoading(false);
}
};

// ── Change-password handler ────────────────────────────────────────
const handleChangePassword = async (e: React.FormEvent) => {
e.preventDefault();
setError("");

@@ -61,9 +146,117 @@ export default function SetupPage() {
}
};

const actualTheme = theme === "system" ? resolvedTheme : theme;

if (mode === "loading") {
return (
<div className="flex min-h-screen items-center justify-center">
<p className="text-muted-foreground text-sm">Loading…</p>
</div>
);
}

// ── Admin initialization form ──────────────────────────────────────
if (mode === "init_admin") {
return (
<div className="bg-background flex min-h-screen items-center justify-center">
<FlickeringGrid
className="absolute inset-0 z-0 mask-[url(/images/deer.svg)] mask-size-[100vw] mask-center mask-no-repeat md:mask-size-[72vh]"
squareSize={4}
gridGap={4}
color={actualTheme === "dark" ? "white" : "black"}
maxOpacity={0.3}
flickerChance={0.25}
/>
<div className="border-border/20 bg-background/5 w-full max-w-md space-y-6 rounded-3xl border p-8 backdrop-blur-sm">
<div className="text-center">
<h1 className="font-serif text-3xl">DeerFlow</h1>
<p className="text-muted-foreground mt-2">Create admin account</p>
<p className="text-muted-foreground mt-1 text-xs">
Set up the administrator account to get started.
</p>
</div>
<form onSubmit={handleInitAdmin} className="space-y-2">
<div className="flex flex-col space-y-1">
<label htmlFor="email" className="text-sm font-medium">
Email
</label>
<Input
id="email"
type="email"
placeholder="you@example.com"
value={email}
onChange={(e) => setEmail(e.target.value)}
required
/>
</div>
<div className="flex flex-col space-y-1">
<label htmlFor="initToken" className="text-sm font-medium">
Initialization Token
</label>
<Input
id="initToken"
type="text"
placeholder="Copy from server startup logs"
value={initToken}
onChange={(e) => setInitToken(e.target.value)}
required
autoComplete="off"
/>
<p className="text-muted-foreground text-xs">
Find the <code>INIT TOKEN</code> printed in the server startup logs.
</p>
</div>
<div className="flex flex-col space-y-1">
<label htmlFor="password" className="text-sm font-medium">
Password
</label>
<Input
id="password"
type="password"
placeholder="Password (min. 8 characters)"
value={newPassword}
onChange={(e) => setNewPassword(e.target.value)}
required
minLength={8}
/>
</div>
<div className="flex flex-col space-y-1">
<label htmlFor="confirmPassword" className="text-sm font-medium">
Confirm Password
</label>
<Input
id="confirmPassword"
type="password"
placeholder="Confirm password"
value={confirmPassword}
onChange={(e) => setConfirmPassword(e.target.value)}
required
minLength={8}
/>
</div>
{error && <p className="ms-1 text-sm text-red-500">{error}</p>}
<Button type="submit" className="w-full" disabled={loading}>
{loading ? "Creating account…" : "Create Admin Account"}
</Button>
</form>
</div>
</div>
);
}

// ── Change-password form (needs_setup after login) ─────────────────
return (
<div className="flex min-h-screen items-center justify-center">
<div className="w-full max-w-sm space-y-6 p-6">
<div className="bg-background flex min-h-screen items-center justify-center">
<FlickeringGrid
className="absolute inset-0 z-0 mask-[url(/images/deer.svg)] mask-size-[100vw] mask-center mask-no-repeat md:mask-size-[72vh]"
squareSize={4}
gridGap={4}
color={actualTheme === "dark" ? "white" : "black"}
maxOpacity={0.3}
flickerChance={0.25}
/>
<div className="border-border/20 bg-background/5 w-full max-w-md space-y-6 rounded-3xl border p-8 backdrop-blur-sm">
<div className="text-center">
<h1 className="font-serif text-3xl">DeerFlow</h1>
<p className="text-muted-foreground mt-2">
@@ -73,7 +266,7 @@ export default function SetupPage() {
Set your real email and a new password.
</p>
</div>
<form onSubmit={handleSetup} className="space-y-4">
<form onSubmit={handleChangePassword} className="space-y-4">
<Input
type="email"
placeholder="Your email"
@@ -83,7 +276,7 @@ export default function SetupPage() {
/>
<Input
type="password"
placeholder="Current password (from console log)"
placeholder="Current password"
value={currentPassword}
onChange={(e) => setCurrentPassword(e.target.value)}
required
@@ -106,7 +299,7 @@ export default function SetupPage() {
/>
{error && <p className="text-sm text-red-500">{error}</p>}
<Button type="submit" className="w-full" disabled={loading}>
{loading ? "Setting up..." : "Complete Setup"}
{loading ? "Setting up…" : "Complete Setup"}
</Button>
</form>
</div>

@@ -23,6 +23,8 @@ export default async function WorkspaceLayout({
);
case "needs_setup":
redirect("/setup");
case "system_setup_required":
redirect("/setup");
case "unauthenticated":
redirect("/login");
case "gateway_unavailable":

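The WorkspaceLayout hunk above adds a `system_setup_required` case alongside `needs_setup` in the layout's redirect switch. A minimal sketch of that tag-to-destination mapping (`routeFor` is a hypothetical helper for illustration only; the real layout calls Next.js `redirect()` directly inside the switch):

```typescript
// Illustrative mapping of AuthResult tags to redirect targets.
// routeFor is NOT part of the DeerFlow codebase; it just makes the
// switch added in the hunk above testable in isolation.
type AuthResult =
  | { tag: "authenticated" }
  | { tag: "needs_setup" }
  | { tag: "system_setup_required" }
  | { tag: "unauthenticated" }
  | { tag: "gateway_unavailable" }
  | { tag: "config_error"; message: string };

function routeFor(result: AuthResult): string | null {
  switch (result.tag) {
    case "needs_setup":
    case "system_setup_required":
      // First-boot admin creation and forced password change share /setup.
      return "/setup";
    case "unauthenticated":
      return "/login";
    default:
      // Authenticated, or error states rendered in place.
      return null;
  }
}
```
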
@@ -69,12 +69,10 @@ export function AccountSettingsPage() {
return (
<div className="space-y-8">
<SettingsSection title="Profile">
<div className="space-y-3">
<div className="flex items-center justify-between">
<div className="space-y-2">
<div className="grid grid-cols-[max-content_max-content] items-center gap-4">
<span className="text-muted-foreground text-sm">Email</span>
<span className="text-sm font-medium">{user?.email ?? "—"}</span>
</div>
<div className="flex items-center justify-between">
<span className="text-muted-foreground text-sm">Role</span>
<span className="text-sm font-medium capitalize">
{user?.system_role ?? "—"}
@@ -83,7 +81,10 @@ export function AccountSettingsPage() {
</div>
</SettingsSection>

<SettingsSection title="Change Password">
<SettingsSection
title="Change Password"
description="Update your account password."
>
<form onSubmit={handleChangePassword} className="max-w-sm space-y-3">
<Input
type="password"
@@ -116,7 +117,7 @@ export function AccountSettingsPage() {
</form>
</SettingsSection>

<SettingsSection title="Session">
<SettingsSection title="" description="">
<Button
variant="destructive"
size="sm"

@@ -20,7 +20,34 @@ export async function getServerSideUser(): Promise<AuthResult> {
return { tag: "config_error", message: String(err) };
}

if (!sessionCookie) return { tag: "unauthenticated" };
if (!sessionCookie) {
// No session — check whether the system has been initialised yet.
const setupController = new AbortController();
const setupTimeout = setTimeout(
() => setupController.abort(),
SSR_AUTH_TIMEOUT_MS,
);
try {
const setupRes = await fetch(
`${internalGatewayUrl}/api/v1/auth/setup-status`,
{
cache: "no-store",
signal: setupController.signal,
},
);
clearTimeout(setupTimeout);
if (setupRes.ok) {
const setupData = (await setupRes.json()) as { needs_setup?: boolean };
if (setupData.needs_setup) {
return { tag: "system_setup_required" };
}
}
} catch {
clearTimeout(setupTimeout);
// If setup-status is unreachable/times out, fall through to unauthenticated.
}
return { tag: "unauthenticated" };
}

const controller = new AbortController();
const timeout = setTimeout(() => controller.abort(), SSR_AUTH_TIMEOUT_MS);

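The `getServerSideUser` hunk above guards the `setup-status` probe with an `AbortController` plus a `setTimeout`-driven abort, the same pattern it already uses for the session check. A generic sketch of that timeout pattern (`fetchWithTimeout` and its parameters are illustrative names, not from the repo):

```typescript
// Illustrative helper for the timeout-guarded fetch pattern used in
// getServerSideUser. fetchWithTimeout is a hypothetical name; the real
// code inlines this logic and maps failures to AuthResult tags.
async function fetchWithTimeout(
  url: string,
  timeoutMs: number,
): Promise<Response | null> {
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), timeoutMs);
  try {
    // no-store mirrors the SSR check: setup state must never be cached.
    return await fetch(url, { cache: "no-store", signal: controller.signal });
  } catch {
    // Aborted or network failure: callers fall back (to "unauthenticated" above).
    return null;
  } finally {
    // Always clear the timer so it cannot fire after the request settles.
    clearTimeout(timer);
  }
}
```
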
@@ -16,6 +16,7 @@ export type User = z.infer<typeof userSchema>;
export type AuthResult =
| { tag: "authenticated"; user: User }
| { tag: "needs_setup"; user: User }
| { tag: "system_setup_required" }
| { tag: "unauthenticated" }
| { tag: "gateway_unavailable" }
| { tag: "config_error"; message: string };
@@ -38,6 +39,8 @@ const AUTH_ERROR_CODES = [
"email_already_exists",
"provider_not_found",
"not_authenticated",
"system_already_initialized",
"invalid_init_token",
] as const;

export type AuthErrorCode = (typeof AUTH_ERROR_CODES)[number];
@@ -47,24 +50,44 @@ export interface AuthErrorResponse {
message: string;
}

const authErrorSchema = z.object({
const AuthErrorSchema = z.object({
code: z.enum(AUTH_ERROR_CODES),
message: z.string(),
});

const ErrorDetailSchema = z.object({
msg: z.string(),
type: z.enum(["value_error"]),
loc: z.array(z.string()),
});

export function parseAuthError(data: unknown): AuthErrorResponse {
// Try top-level {code, message} first
const parsed = authErrorSchema.safeParse(data);
const parsed = AuthErrorSchema.safeParse(data);
if (parsed.success) return parsed.data;

// Unwrap FastAPI's {detail: {code, message}} envelope
if (typeof data === "object" && data !== null && "detail" in data) {
const detail = (data as Record<string, unknown>).detail;
const nested = authErrorSchema.safeParse(detail);
const nested = AuthErrorSchema.safeParse(detail);
if (nested.success) return nested.data;
// Legacy string-detail responses
if (typeof detail === "string") {
return { code: "invalid_credentials", message: detail };
} else if (Array.isArray(detail)) {
// Handle list of error details (e.g. from Pydantic validation)
const firstDetail = detail[0];
if (typeof firstDetail === "object" && firstDetail !== null) {
const errorDetail = ErrorDetailSchema.safeParse(firstDetail);
if (errorDetail.success) {
return { code: "invalid_credentials", message: errorDetail.data.msg };
}
}
} else if (typeof detail === "object" && detail !== null) {
const errorDetail = ErrorDetailSchema.safeParse(detail);
if (errorDetail.success) {
return { code: "invalid_credentials", message: errorDetail.data.msg };
}
}
}
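The `parseAuthError` changes above normalise several backend error shapes (top-level `{code, message}`, FastAPI's `{detail: …}` envelope, legacy string details, and Pydantic validation lists) into one `AuthErrorResponse`. A dependency-free sketch of that unwrapping order, with plain type guards standing in for the zod schemas; the final fallback is an assumption, since the diff hunk ends before the original function's tail:

```typescript
// Simplified sketch of parseAuthError's unwrapping order. The real
// implementation validates shapes with zod (AuthErrorSchema,
// ErrorDetailSchema); plain type guards stand in for them here.
type AuthErrorResponse = { code: string; message: string };

function isAuthError(v: unknown): v is AuthErrorResponse {
  return (
    typeof v === "object" && v !== null &&
    typeof (v as AuthErrorResponse).code === "string" &&
    typeof (v as AuthErrorResponse).message === "string"
  );
}

function parseAuthErrorSketch(data: unknown): AuthErrorResponse {
  // 1. Top-level {code, message}
  if (isAuthError(data)) return data;
  if (typeof data === "object" && data !== null && "detail" in data) {
    const detail = (data as Record<string, unknown>).detail;
    // 2. FastAPI envelope {detail: {code, message}}
    if (isAuthError(detail)) return detail;
    // 3. Legacy string-detail responses
    if (typeof detail === "string") {
      return { code: "invalid_credentials", message: detail };
    }
    // 4. Pydantic validation list: [{msg, type, loc}, ...]
    if (Array.isArray(detail) && detail.length > 0) {
      const first: unknown = detail[0];
      if (
        typeof first === "object" && first !== null &&
        typeof (first as { msg?: unknown }).msg === "string"
      ) {
        return { code: "invalid_credentials", message: (first as { msg: string }).msg };
      }
    }
  }
  // Assumed fallback when no known shape matches (not visible in the hunk).
  return { code: "invalid_credentials", message: "Unknown error" };
}
```
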