Promotes watchfiles from a transitive dep (uvicorn[standard]) to a
direct dep so that uv-based installs pull it in even when uvicorn's
optional extras are unselected. `requirements.txt` already listed it.
The lockfile churn beyond watchfiles is pre-existing drift: mem0ai was
added to pyproject.toml in #598 without re-locking, so `uv lock --check`
was already failing on main before this commit.
Review feedback on #611:
1. `Path.match` (used by uvicorn to filter reload candidates) does not
expand `**` on Python < 3.13, so the flat `WareHouse/*` default only
caught direct children — the agent-generated files that actually
triggered #569 live at `WareHouse/<project>/<file>` and deeper.
Expand defaults to multi-depth glob patterns (up to 10 levels) for
each excluded dir.
2. `--reload-exclude` is only honoured by the watchfiles-backed reloader;
without watchfiles uvicorn silently falls back to StatReload and
drops every exclude pattern. Add `watchfiles` to requirements.txt so
the filter works out of the box, and log a WARNING when --reload
runs without watchfiles installed instead of failing silently.
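The depth-expansion in item 1 can be sketched as follows. This is an illustrative helper, not the committed code: the function name `expand_exclude` and the 10-level cap are assumptions matching the description above, and it sidesteps recursive `**` by emitting one explicit pattern per depth so `PurePath.match` works on every supported Python version.

```python
from pathlib import PurePath

MAX_DEPTH = 10  # matches the "up to 10 levels" default described above

def expand_exclude(dirname: str, max_depth: int = MAX_DEPTH) -> list[str]:
    # "WareHouse/*", "WareHouse/*/*", ... down to max_depth levels, so a
    # right-anchored PurePath.match hits files at any of those depths
    # without relying on recursive "**" semantics.
    return [dirname + "/" + "/".join(["*"] * d) for d in range(1, max_depth + 1)]

patterns = expand_exclude("WareHouse")
# A file two levels deep matches the two-star pattern:
assert PurePath("WareHouse/proj/file.py").match("WareHouse/*/*")
# Real source paths are not over-excluded:
assert not PurePath("server/app.py").match("WareHouse/*")
```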
Test coverage:
- `TestExcludePatternDepth` parametrised over 4 WareHouse depths plus
logs/data/node_modules cases; also asserts real source paths like
`server/app.py` are NOT matched (no over-exclusion regression).
- `TestWatchfilesWarning` covers the new `_watchfiles_available` probe
and the WARNING log path.
20/20 tests pass; 8 new.
When ``python server_main.py --reload`` is used, uvicorn's default for
reload_dirs is the current working directory, and StatReload walks the
whole tree for ``*.py`` files. Agent-generated code under
``WareHouse/session_<uuid>/code_workspace/<file>.py`` therefore triggers
a server restart mid-workflow; the webui is left waiting indefinitely
and the in-flight session is cancelled.
The project already ships a warning in both READMEs telling users to
drop ``--reload``, but that flag is exactly the tool dev loops need.
Fix: pass an explicit ``reload_dirs`` list containing only the server's
Python source folders (check, entity, functions, mcp_example, runtime,
schema_registry, server, tools, utils, workflow) and a matching
``reload_excludes`` set (WareHouse, logs, data, temp, node_modules) so
watchfiles-backed installs also stop observing output directories.
Users can override either list via repeatable ``--reload-dir`` and
``--reload-exclude`` flags.
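The repeatable override flags can be wired with argparse's `action="append"`. This is a hypothetical sketch of that wiring; the flag names match the commit, while the parser setup and sample values are illustrative.

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--reload", action="store_true")
# action="append" makes each flag repeatable; default=None lets the
# caller distinguish "not given" (use built-in defaults) from "given".
parser.add_argument("--reload-dir", action="append", default=None,
                    help="watch this directory (repeatable; overrides defaults)")
parser.add_argument("--reload-exclude", action="append", default=None,
                    help="ignore this glob pattern (repeatable; overrides defaults)")

args = parser.parse_args(
    ["--reload", "--reload-dir", "server", "--reload-dir", "tools"]
)
# args.reload_dir == ["server", "tools"]; args.reload_exclude is None,
# so the built-in exclude defaults would apply.
```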
The reload-kwargs construction is extracted into a pure helper so the
behaviour is unit-tested without spinning up a real server; nine new
tests cover the default behaviour, user overrides, argparse wiring, and
that the returned lists are defensive copies.
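The extracted helper might look roughly like this. The name `build_reload_kwargs` and the abbreviated default lists are assumptions for illustration; the point is that a pure function over plain lists is testable without a server, and `list(...)` copies keep callers from mutating the module-level defaults.

```python
# Abbreviated stand-ins for the real default lists described above.
DEFAULT_DIRS = ["server", "workflow", "utils"]
DEFAULT_EXCLUDES = ["WareHouse", "logs", "data"]

def build_reload_kwargs(reload_dirs=None, reload_excludes=None) -> dict:
    # Fall back to defaults when no override is given; always return
    # fresh lists so callers cannot mutate the shared defaults.
    return {
        "reload_dirs": list(reload_dirs or DEFAULT_DIRS),
        "reload_excludes": list(reload_excludes or DEFAULT_EXCLUDES),
    }

kw = build_reload_kwargs()
kw["reload_dirs"].append("scratch")     # mutating the result...
assert "scratch" not in DEFAULT_DIRS    # ...never leaks into the defaults
```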
README / README-zh have been updated to reflect the new default.
Fixes #569
WebSocketManager.send_message_sync is called from background worker threads
(via asyncio.get_event_loop().run_in_executor) during workflow execution — by
WebSocketLogger, ArtifactDispatcher, and WebPromptChannel.
Previous implementation:

    try:
        loop = asyncio.get_running_loop()
        if loop.is_running():
            asyncio.create_task(...)  # path only reachable from main thread
        else:
            asyncio.run(...)          # creates a NEW event loop
    except RuntimeError:
        asyncio.run(...)              # also creates a new event loop
The problem: WebSocket objects are bound to the *main* uvicorn event loop.
asyncio.run() spins up a separate event loop and calls websocket.send_text()
there, which in Python 3.12 raises:
RuntimeError: Task got Future attached to a different loop
...causing all log/artifact/prompt messages emitted from workflow threads to be
silently dropped or to crash the worker thread.
Fix:
- Store the event loop that created the first WebSocket connection as
self._owner_loop (captured in connect(), which always runs on the main loop).
- send_message_sync schedules the coroutine on that loop via
asyncio.run_coroutine_threadsafe(), then waits with a 10 s timeout.
- Calling from the main thread still works (run_coroutine_threadsafe is safe
when called from any thread, including the loop thread itself).
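A condensed sketch of the fix described above, with the WebSocket replaced by a plain list so it is self-contained; the class and method names mirror the commit, but the body is illustrative, not the committed code.

```python
import asyncio
from concurrent.futures import TimeoutError as FutureTimeout

class WebSocketManagerSketch:
    def __init__(self):
        self._owner_loop: asyncio.AbstractEventLoop | None = None
        self.sent: list[str] = []

    async def connect(self):
        # connect() always runs on the main uvicorn loop, so capture it
        # once; later connects reuse the same owner loop.
        if self._owner_loop is None:
            self._owner_loop = asyncio.get_running_loop()

    async def _send(self, message: str):
        self.sent.append(message)  # stand-in for websocket.send_text()

    def send_message_sync(self, message: str):
        if self._owner_loop is None:
            return  # no connection yet; drop quietly instead of crashing
        # Marshal the coroutine onto the owner loop from any thread, then
        # block this (worker) thread on the result with a 10 s timeout.
        future = asyncio.run_coroutine_threadsafe(
            self._send(message), self._owner_loop)
        try:
            future.result(timeout=10)
        except FutureTimeout:
            future.cancel()
```

The send coroutine now always executes on the loop that owns the WebSocket, so no second event loop is ever created.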
Added 7 tests covering:
- send from main thread
- send from worker thread (verifies send_text runs on the owner loop thread)
- 8 concurrent workers with no lost messages
- send after disconnect does not crash
- send before connect (no owner loop) does not crash
- owner loop captured on first connect
- owner loop stable across multiple connects
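The second test above (worker-thread send lands on the owner loop) can be shaped roughly like this. It is a hypothetical standalone version with a stub coroutine standing in for `send_text`, not the committed test.

```python
import asyncio
import threading

def test_send_from_worker_thread_runs_on_owner_loop():
    seen_threads = []
    loop = asyncio.new_event_loop()
    loop_thread = threading.Thread(target=loop.run_forever, daemon=True)
    loop_thread.start()

    async def fake_send():
        # Record which thread the coroutine actually executes on.
        seen_threads.append(threading.current_thread())

    def worker():
        # What send_message_sync does under the hood for worker threads.
        asyncio.run_coroutine_threadsafe(fake_send(), loop).result(5)

    w = threading.Thread(target=worker)
    w.start()
    w.join(5)
    loop.call_soon_threadsafe(loop.stop)
    # The coroutine ran on the owner loop's thread, not the worker's.
    assert seen_threads == [loop_thread]
```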