Merge pull request #9299 from oraios/mcp-self-improvement

 Add agentic DevEnv (with extended MCP server for self-improvement)
This commit is contained in:
Andrey Antukh 2026-05-12 15:05:42 +02:00 committed by GitHub
commit 1e746add31
GPG Key ID: B5690EEEBB952194
29 changed files with 1968 additions and 3 deletions

7
.gitignore vendored
@@ -15,6 +15,12 @@
.repl
/*.jpg
/*.md
!CHANGES.md
!CONTRIBUTING.md
!README.md
!AGENTS.md
!CODE_OF_CONDUCT.md
!SECURITY.md
/*.png
/*.svg
/*.sql
@@ -86,3 +92,4 @@
/**/.yarn/*
/.pnpm-store
/.vscode
/.idea

2
.serena/.gitignore vendored Normal file
@@ -0,0 +1,2 @@
/cache
/project.local.yml

@@ -0,0 +1,32 @@
# Creating Commits
## Message Format
```
:emoji: Subject line (imperative, capitalized, no period, ≤70 chars)
Body (clear, concise description)
Co-authored-by: <You (the LLM)>
```
## Commit Type Emojis
`:bug:` bug fix · `:sparkles:` enhancement · `:tada:` new feature · `:recycle:` refactor · `:lipstick:` cosmetic · `:ambulance:` critical fix · `:books:` docs · `:construction:` WIP · `:boom:` breaking · `:wrench:` config · `:zap:` perf · `:whale:` docker · `:paperclip:` other · `:arrow_up:` dep upgrade · `:arrow_down:` dep downgrade · `:fire:` removal · `:globe_with_meridians:` translations · `:rocket:` epic/highlight
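A hypothetical example following this format (subject, body, and author details are invented for illustration):
```
:bug: Fix crash when deleting a locked layer

Guard the delete handler against locked shapes so the workspace
no longer throws when a locked layer is removed via the context menu.

Co-authored-by: Claude <claude@example.com>
```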
## Changelog (CHANGES.md)
Update `CHANGES.md` for user-facing or notable changes. Add entry under the current unreleased version in the matching section (`### :boom:`, `### :sparkles:`, `### :bug:`, etc.).
Entry format:
```
- Description of change [Taiga #NNNN](https://tree.taiga.io/project/penpot/us/NNNN)
```
or for GitHub issues/PRs:
```
- Description of change [Github #NNNN](https://github.com/penpot/penpot/issues/NNNN)
```
Changes that affect the JavaScript plugin API must additionally be documented in `plugins/CHANGELOG.md`:
* Add an entry at the top of the file (unreleased section)
* Prefix entries that change the types/signatures in the API with `**plugin-types:**`, and entries affecting the runtime with `**plugin-runtime:**`.

@@ -0,0 +1,30 @@
# Creating Pull Requests
Important: Before creating a PR, ensure that you are on a branch that is specific to the
issue or feature you are working on. If necessary, create a new branch.
## Title Format
PR titles follow the same convention as commit titles:
```
:emoji: Subject line (imperative, capitalized, no period, ≤70 chars)
```
See the `creating-commits` memory for the list of emoji codes.
## Description Format
The PR description must start with the following notice:
> **Note:** This PR was created with AI assistance as part of the Penpot MCP self-improvement initiative.
In addition to sections summarising and explaining the changes, the description should contain a **Related Issues** section with a bullet list of linked issues:
```
- Fixes #NNNN
- Resolves #NNNN
- Relates to #NNNN
```

@@ -0,0 +1,27 @@
You are working on the GitHub project penpot/penpot.
# Working with Penpot Designs via the JavaScript API
Before working with Penpot designs, call the `high_level_overview` tool of the Penpot MCP server.
It explains the API, which you can use to automate tasks via the `execute_code` tool.
# Dev Workflow
Memories:
- before creating a commit, read `creating-commits`.
- before creating a PR, read `creating-prs`.
# Frontend
Read the file `frontend/AGENTS.md` for an overview.
Memories:
- connection between the JavaScript API and the ClojureScript code: `frontend/js-api-to-cljs-binding`.
- executing ClojureScript code in the frontend: `frontend/cljs-repl`.
- programmatically navigating to a file in the workspace: `frontend/navigation`.
- handling Clojure compiler errors, runtime patching and debug helpers: `frontend/handling-errors-and-debugging`.
## Detecting Crashes
The Penpot frontend can crash silently from the JS API's perspective: `execute_code` calls return successfully, but 1-2s later the workspace becomes unusable (Internal Error page).
The `execute_code` tool then stops working, but `cljs_repl` still works. Use it to detect a crash via `(some? (:exception @app.main.store/state))`.
For details on handling crashes, read memory `frontend/handling-crashes`.

@@ -0,0 +1,76 @@
# ClojureScript REPL Access via shadow-cljs
Execute code in the REPL via the Penpot MCP's `cljs_repl` tool.
## Accessing App State
The main store is `app.main.store/state`. It contains workspace metadata, selection, UI state, etc.
However, **page objects are NOT in the main store atom**. They live behind derived refs.
### Top-level store keys (subset)
`:current-page-id`, `:current-file-id`, `:workspace-local`, `:workspace-global`,
`:workspace-trimmed-page`, `:workspace-undo`, `:workspace-guides`, `:workspace-layout`,
`:workspace-presence`, `:workspace-ready`, `:profile`, `:route`, etc.
**Notable absence:** There is no `:workspace-data` key in the store. The old path
`(get-in state [:workspace-data :pages-index page-id :objects])` does NOT work.
### Getting page objects — use `app.main.refs/workspace-page-objects`
```clojure
;; This is a derived ref (reactive lens). Deref it directly:
(let [objects @app.main.refs/workspace-page-objects
shape (get objects (parse-uuid "some-uuid-here"))]
(select-keys shape [:name :type :x :y :width :height :fills :strokes :rotation :opacity :frame-id :parent-id]))
```
### Getting the current selection
```clojure
;; Selection is in the main store under :workspace-local :selected
(let [state @app.main.store/state
selected (get-in state [:workspace-local :selected])]
(mapv str selected))
;; Returns vector of UUID strings for selected shapes
```
### Other useful store access
```clojure
;; Current page id
(:current-page-id @app.main.store/state)
;; Verify state is accessible
(some? @app.main.store/state) ;; should be true
;; workspace-local keys: :zoom :selected :hide-toolbar :last-selected :vbox
;; :highlighted :vport :expanded :selrect :zoom-inverse
```
### Shape data structure (internal ClojureScript representation)
Shape keys use kebab-case keywords (`:fill-color`, `:fill-opacity`, `:parent-id`, `:frame-id`).
The shape `:type` is a keyword like `:rect`, `:path`, `:text`, `:ellipse`, `:image`, `:bool`, `:svg-raw`, `:frame`, `:group`.
Note `:rect` in CLJS corresponds to "rectangle" in the JS Plugin API, and `:frame` corresponds to "board".
Component instance shapes additionally carry `:component-id` and `:component-file` directly, and `:component-root` flags the root of an instance. To navigate from a shape to its component, use `app.common.types.container/get-head-shape` (nearest head) or `get-instance-root` (outermost root) — these differ when instances are nested.
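For example, a quick tally of shape types on the current page (a REPL sketch; requires a live workspace, and the counts shown are purely illustrative):
```clojure
;; Deref the derived ref and count shapes per :type keyword.
(->> @app.main.refs/workspace-page-objects
     vals
     (map :type)
     frequencies)
;; e.g. {:frame 2, :rect 10, :text 4}
```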
### Helper utilities (`app.plugins.utils`)
Despite living under `plugins/`, these are general-purpose lookup helpers usable from any CLJS:
- `locate-shape` — find a shape by file-id, page-id, id
- `locate-objects` — get the object tree for a page
- `locate-component` — resolve the component for a shape (walks to **outermost** instance root, not nearest head — beware when instances are nested)
- `locate-library-component` — direct lookup by file-id and component-id
- `locate-file` — look up a file by id from state
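A typical lookup using these helpers, taking the ids from the current store state (a sketch; argument order as described above, and it assumes a shape is currently selected):
```clojure
(let [state    @app.main.store/state
      file-id  (:current-file-id state)
      page-id  (:current-page-id state)
      shape-id (first (get-in state [:workspace-local :selected]))]
  ;; Resolve the selected shape and inspect a few keys.
  (select-keys (app.plugins.utils/locate-shape file-id page-id shape-id)
               [:name :type :component-id]))
```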
## Notes
- The `:main` build has multiple modules: shared, main, main-workspace, rasterizer, etc.
- `app.main.store/state` is a potok store (wrapping an okulary atom) created via `defonce`
- Use `timeout` to avoid hanging if the browser is disconnected
## Troubleshooting
`cljs_repl` may not connect to the right runtime when several are attached (e.g. workspace tab + rasterizer). Verify with `(.-title js/document)` — it should show your file name, not "Penpot - Rasterizer".
To list runtimes or target one by client-id, use `npx shadow-cljs clj-eval` from `/home/penpot/penpot/frontend`. It talks to the shadow-cljs JVM process, so unlike `cljs_repl` it has access to `shadow.cljs.devtools.api`:
```bash
printf '(shadow.cljs.devtools.api/repl-runtimes :main)\n' | timeout 10 npx shadow-cljs clj-eval --stdin
printf '(shadow.cljs.devtools.api/cljs-eval :main "<cljs-code>" {:client-id 5})\n' | timeout 10 npx shadow-cljs clj-eval --stdin
```

@@ -0,0 +1,41 @@
# Handling Penpot Frontend Crashes
When the Penpot frontend crashes, it usually shows the **Internal Error** page (title text "Something bad happened", class `main_ui_static__download-link`).
A typical failure pattern: a change appears to go through (the JS API's `execute_code` returns successfully), but about 1-2s later an `update-file` request carrying the change hits the backend and gets rejected.
So after making changes, always check whether the frontend has crashed.
After a crash, `execute_code` is unusable (no instances connected), and any data in `storage` is lost, but `cljs_repl` keeps working!
## 1. Detect the crash
In the cljs REPL, `(some? (:exception @app.main.store/state))` returns `true` when the Internal Error page is showing,
and `false` on a healthy workspace (including after a successful reload).
## 2. Read the cause
The exception is stored at `(:exception @app.main.store/state)`. Useful keys:
- `:type`, `:code`, `:status` — error class (e.g. `:validation` / `:referential-integrity` / `400`)
- `:hint`, `:details` — human-readable explanation; `:details` typically contains a vector of validation problems with `:shape-id`, `:page-id`, `:args`, etc.
- `:uri` — the API endpoint that returned the error (e.g. `update-file`)
- `:app.main.errors/instance` — the underlying JS Error object
- `:app.main.errors/trace` — JS stack trace string (only shows the response-handling path, not the dispatch site that produced the bad change)
```clojure
(let [ex (:exception @app.main.store/state)]
  (select-keys ex [:type :code :status :hint :details :uri]))
```
For backend validation errors (`:type :validation`), `:details` is the most informative field — it tells you exactly which shape and which invariant was violated.
## 3. Recover and continue testing
Reload steps:
1. List tabs with `playwright:browser_tabs` (`action: list`) and find the Penpot workspace tab (URL contains `/#/workspace`, title ends in `- Penpot`).
2. If it isn't the current tab, select it via `playwright:browser_tabs` (`action: select`, `index: <n>`). The selected tab's URL then appears as "Page URL" in the result.
3. Reload by calling `playwright:browser_navigate` with that same URL.
4. Confirm recovery: `(some? (:exception @app.main.store/state))` should now return `false`.
Whether the offending change persists depends on the crash type:
For **backend-rejected changes** (e.g. `:type :validation`, 4xx on `update-file`), changes are NOT persisted. Reload restores the pre-crash state — safe to retry.

@@ -0,0 +1,49 @@
# Handling Errors and Debugging
## Finding source errors
Two tools are available for finding errors in Clojure source code (which you may introduce yourself through edits):
1. `cljs_compiler_output`
2. `clj_check_parentheses`
The second tool exists because unbalanced parentheses produce an uninformative compiler error;
`clj_check_parentheses` can often pinpoint the exact location of such errors.
## Runtime patching with `set!`
Some frontend vars are deliberately mutable escape hatches for runtime instrumentation or circular-dependency patching.
From `cljs_repl`, use `set!` for temporary debugging of CLJS vars such as
`app.main.store/on-event`, `app.main.errors/reload-file`, `app.main.errors/is-plugin-error?`,
`app.main.errors/last-report`, or `app.main.errors/last-exception`.
These patches affect only the live browser runtime and disappear on reload or recompilation.
```clojure
;; Log non-noisy Potok events temporarily.
(set! app.main.store/on-event
(fn [event]
(when (potok.v2.core/event? event)
(.log js/console (potok.v2.core/repr-event event)))))
```
Restore mutable hooks after debugging, or reload the frontend. Use JVM `alter-var-root` only for JVM Clojure;
it is not the normal way to patch live CLJS browser vars.
## Browser-console debug namespace
In development, the JS console exposes `debug` helpers from `frontend/src/debug.cljs`:
```javascript
debug.set_logging("namespace", "debug");
debug.dump_state();
debug.dump_buffer();
debug.get_state(":workspace-local :selected");
debug.dump_objects();
debug.dump_object("Rect-1");
debug.dump_selected();
debug.dump_tree(true, true);
```
Visual workspace debug overlays can be toggled with `debug.toggle_debug("bounding-boxes")`, `"group"`, `"events"`, or `"rotation-handler"`; `debug.debug_all()` and `debug.debug_none()` toggle all visual aids.
For temporary source traces, prefer existing logging (`app.common.logging` / `app.util.logging`) or short-lived `prn`, `app.common.pprint/pprint`, `js/console.log`, or `js-debugger` calls. Remove temporary source instrumentation before committing.

@@ -0,0 +1,25 @@
# How the Plugin JS API connects to ClojureScript
## Type Definitions
- `plugins/libs/plugin-types/index.d.ts` contains TypeScript type declarations (e.g. `ShapeBase`, `LibraryComponent`).
- These are **type-only** — no runtime code. The actual objects are constructed in ClojureScript.
## Runtime Shape Proxy
- `frontend/src/app/plugins/shape.cljs` builds the JS shape proxy via `obj/reify`.
- Each method/property from the TS interface (e.g. `:component`, `:isComponentRoot`, `:componentHead`) is defined as a keyword entry in the `obj/reify` form, with a ClojureScript function as the implementation.
- The proxy is created by the `shape-proxy` function, which takes `plugin-id`, `file-id`, `page-id`, and shape `id`, and closes over them.
## Library Proxies
- `frontend/src/app/plugins/library.cljs` defines proxies for library types like `LibraryComponentProxy` (via `lib-component-proxy`), also using `obj/reify`.
- The proxy satisfies the `LibraryComponent` TS interface, exposing `.id`, `.name`, `.path`, etc.
## Circular Dependency Resolution
- `shape.cljs` and `library.cljs` have circular dependencies (shapes reference library component proxies and vice versa).
- `shape.cljs` declares forward references as mutable `def nil` vars (e.g. `(def lib-component-proxy nil)`, line 144).
- `frontend/src/app/plugins.cljs` patches them at load time: `(set! shape/lib-component-proxy library/lib-component-proxy)`.
- Same pattern for `lib-typography-proxy?` and `variant-proxy`.
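The pattern can be sketched as follows (simplified; the namespace aliases and surrounding code are illustrative, not the exact source):
```clojure
;; shape.cljs — forward reference, nil until patched:
(def lib-component-proxy nil)

;; plugins.cljs — patch once both namespaces are loaded, breaking the cycle:
(set! shape/lib-component-proxy library/lib-component-proxy)
```
Because calls go through the var, any code in `shape.cljs` that invokes `lib-component-proxy` works as soon as the patch has run at load time.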
## Key Domain Namespaces
- `app.common.types.component` (aliased `ctk`) — component predicates: `instance-root?`, `instance-head?`, `in-component-copy?`, `is-variant?`
- `app.common.types.container` (aliased `ctn`) — container/tree operations: `in-any-component?`, `get-instance-root`, `get-head-shape`, `inside-component-main?`
- `app.common.types.file` (aliased `ctf`) — file-level operations: `resolve-component`, `get-ref-shape`

@@ -0,0 +1,14 @@
# Navigating to a File in the Workspace
To programmatically open a file in the workspace, use `cljs_repl` with:
```clojure
(do (require '[app.main.data.common :as dcm])
(app.main.store/emit! (dcm/go-to-workspace
:team-id (parse-uuid "<team-id>")
:file-id (parse-uuid "<file-id>")
:page-id (parse-uuid "<page-id>"))))
```
**All three IDs are required.** You can get:
- `team-id` from `(:current-team-id @app.main.store/state)`
- `file-id` from the dashboard files: `(vals (:files @app.main.store/state))`
- `page-id` by fetching the file via `(rp/cmd! :get-file {:id file-id :features (get @app.main.store/state :features)})` and reading `(get-in file-data [:data :pages])` from the result

141
.serena/project.yml Normal file
@@ -0,0 +1,141 @@
# the name by which the project can be referenced within Serena
project_name: "penpot"
# list of languages for which language servers are started; choose from:
# al ansible bash clojure cpp
# cpp_ccls crystal csharp csharp_omnisharp dart
# elixir elm erlang fortran fsharp
# go groovy haskell haxe hlsl
# java json julia kotlin lean4
# lua luau markdown matlab msl
# nix ocaml pascal perl php
# php_phpactor powershell python python_jedi python_ty
# r rego ruby ruby_solargraph rust
# scala solidity swift systemverilog terraform
# toml typescript typescript_vts vue yaml
# zig
# (This list may be outdated. For the current list, see values of Language enum here:
# https://github.com/oraios/serena/blob/main/src/solidlsp/ls_config.py
# For some languages, there are alternative language servers, e.g. csharp_omnisharp, ruby_solargraph.)
# Note:
# - For C, use cpp
# - For JavaScript, use typescript
# - For Free Pascal/Lazarus, use pascal
# Special requirements:
# Some languages require additional setup/installations.
# See here for details: https://oraios.github.io/serena/01-about/020_programming-languages.html#language-servers
# When using multiple languages, the first language server that supports a given file will be used for that file.
# The first language is the default language and the respective language server will be used as a fallback.
# Note that when using the JetBrains backend, language servers are not used and this list is correspondingly ignored.
languages:
- clojure
- typescript
- rust
# the encoding used by text files in the project
# For a list of possible encodings, see https://docs.python.org/3.11/library/codecs.html#standard-encodings
encoding: "utf-8"
# line ending convention to use when writing source files.
# Possible values: unset (use global setting), "lf", "crlf", or "native" (platform default)
# This does not affect Serena's own files (e.g. memories and configuration files), which always use native line endings.
line_ending:
# The language backend to use for this project.
# If not set, the global setting from serena_config.yml is used.
# Valid values: LSP, JetBrains
# Note: the backend is fixed at startup. If a project with a different backend
# is activated post-init, an error will be returned.
language_backend:
# whether to use project's .gitignore files to ignore files
ignore_all_files_in_gitignore: true
# advanced configuration option allowing you to configure language server-specific options.
# Maps the language key to the options.
# Have a look at the docstring of the constructors of the LS implementations within solidlsp (e.g., for C# or PHP) to see which options are available.
# No documentation on options means no options are available.
ls_specific_settings: {}
# list of additional paths to ignore in this project.
# Same syntax as gitignore, so you can use * and **.
# Note: global ignored_paths from serena_config.yml are also applied additively.
ignored_paths: []
# whether the project is in read-only mode
# If set to true, all editing tools will be disabled and attempts to use them will result in an error
# Added on 2025-04-18
read_only: false
# list of tool names to exclude.
# This extends the existing exclusions (e.g. from the global configuration)
# Find the list of tools here: https://oraios.github.io/serena/01-about/035_tools.html
excluded_tools: []
# list of tools to include that would otherwise be disabled (particularly optional tools that are disabled by default).
# This extends the existing inclusions (e.g. from the global configuration).
# Find the list of tools here: https://oraios.github.io/serena/01-about/035_tools.html
included_optional_tools: []
# fixed set of tools to use as the base tool set (if non-empty), replacing Serena's default set of tools.
# This cannot be combined with non-empty excluded_tools or included_optional_tools.
# Find the list of tools here: https://oraios.github.io/serena/01-about/035_tools.html
fixed_tools: []
# list of mode names that are always to be included in the set of active modes
# The full set of modes to be activated is base_modes + default_modes.
# If the setting is undefined, the base_modes from the global configuration (serena_config.yml) apply.
# Otherwise, this setting overrides the global configuration.
# Set this to [] to disable base modes for this project.
# Set this to a list of mode names to always include the respective modes for this project.
base_modes:
# list of mode names that are to be activated by default, overriding the setting in the global configuration.
# The full set of modes to be activated is base_modes (from global config) + default_modes + added_modes.
# If the setting is undefined/empty, the default_modes from the global configuration (serena_config.yml) apply.
# Otherwise, this overrides the setting from the global configuration (serena_config.yml).
# Therefore, you can set this to [] if you do not want the default modes defined in the global config to apply
# for this project.
# This setting can, in turn, be overridden by CLI parameters (--mode).
# See https://oraios.github.io/serena/02-usage/050_configuration.html#modes
default_modes:
# initial prompt for the project. It will always be given to the LLM upon activating the project
# (contrary to the memories, which are loaded on demand).
initial_prompt: |
CRITICAL: Always read the memory `critical-info` before you do anything else.
# time budget (seconds) per tool call for the retrieval of additional symbol information
# such as docstrings or parameter information.
# This overrides the corresponding setting in the global configuration; see the documentation there.
# If null or missing, use the setting from the global configuration.
symbol_info_budget:
# list of regex patterns which, when matched, mark a memory entry as readonly.
# Extends the list from the global configuration, merging the two lists.
read_only_memory_patterns: []
# list of regex patterns for memories to completely ignore.
# Matching memories will not appear in list_memories or activate_project output
# and cannot be accessed via read_memory or write_memory.
# To access ignored memory files, use the read_file tool on the raw file path.
# Extends the list from the global configuration, merging the two lists.
# Example: ["_archive/.*", "_episodes/.*"]
ignored_memory_patterns: []
# list of mode names to be activated additionally for this project, e.g. ["query-projects"]
# The full set of modes to be activated is base_modes (from global config) + default_modes + added_modes.
# See https://oraios.github.io/serena/02-usage/050_configuration.html#modes
added_modes:
# list of additional workspace folder paths for cross-package reference support (e.g. in monorepos).
# Paths can be absolute or relative to the project root.
# Each folder is registered as an LSP workspace folder, enabling language servers to discover
# symbols and references across package boundaries.
# Currently supported for: TypeScript.
# Example:
# additional_workspace_folders:
# - ../sibling-package
# - ../shared-lib
additional_workspace_folders: []

@@ -185,7 +185,12 @@ ENV CLJKONDO_VERSION=2026.04.15 \
BABASHKA_VERSION=1.12.208 \
CLJFMT_VERSION=0.16.4 \
PIXI_VERSION=0.67.2 \
GITHUB_CLI_VERSION=2.91.0
GITHUB_CLI_VERSION=2.91.0 \
UV_VERSION=0.11.9 \
UV_TOOL_DIR=/opt/uv/tools \
UV_TOOL_BIN_DIR=/opt/utils/bin \
UV_PYTHON_INSTALL_DIR=/opt/uv/python \
SERENA_VERSION=v1.3.0
RUN set -ex; \
ARCH="$(dpkg --print-architecture)"; \
@@ -309,6 +314,31 @@ RUN set -ex; \
mv /tmp/mc /opt/utils/bin/; \
chmod +x /opt/utils/bin/mc;
# Install uv
RUN set -ex; \
ARCH="$(dpkg --print-architecture)"; \
case "${ARCH}" in \
aarch64|arm64) \
BINARY_URL="https://github.com/astral-sh/uv/releases/download/${UV_VERSION}/uv-aarch64-unknown-linux-musl.tar.gz"; \
;; \
amd64|x86_64) \
BINARY_URL="https://github.com/astral-sh/uv/releases/download/${UV_VERSION}/uv-x86_64-unknown-linux-musl.tar.gz"; \
;; \
*) \
echo "Unsupported arch: ${ARCH}"; \
exit 1; \
;; \
esac; \
curl -LfsSo /tmp/uv.tar.gz ${BINARY_URL}; \
cd /opt/utils/bin; \
tar -xf /tmp/uv.tar.gz --strip-components=1; \
rm -rf /tmp/uv.tar.gz;
# Install uv-managed tools
RUN set -ex; \
/opt/utils/bin/uv tool install -p 3.13 \
"serena-agent@${SERENA_VERSION}" \
--prerelease=allow;
################################################################################
## DEVENV BASE
@@ -421,6 +451,11 @@ ENV LANG='C.UTF-8' \
JAVA_HOME="/opt/jdk" \
CARGO_HOME="/opt/cargo" \
RUSTUP_HOME="/opt/rustup" \
UV_TOOL_DIR="/opt/uv/tools" \
UV_TOOL_BIN_DIR="/opt/utils/bin" \
UV_PYTHON_INSTALL_DIR="/opt/uv/python" \
SERENA_HOME="/home/penpot/.serena" \
SERENA_CONTEXT="claude-code" \
PATH="/opt/jdk/bin:/opt/gh/bin:/opt/utils/bin:/opt/clojure/bin:/opt/node/bin:/opt/imagick/bin:/opt/cargo/bin:$PATH"
COPY --from=penpotapp/imagemagick:7.1.2-13 /opt/imagick /opt/imagick
@@ -429,6 +464,7 @@ COPY --from=setup-jvm /opt/clojure /opt/clojure
COPY --from=setup-node /opt/node /opt/node
COPY --from=setup-utils /opt/utils /opt/utils
COPY --from=setup-utils /opt/gh /opt/gh
COPY --from=setup-utils /opt/uv /opt/uv
COPY --from=setup-rust /opt/cargo /opt/cargo
COPY --from=setup-rust /opt/rustup /opt/rustup
COPY --from=setup-rust /opt/emsdk /opt/emsdk
@@ -444,6 +480,7 @@ COPY files/tmux.conf /root/.tmux.conf
COPY files/sudoers /etc/sudoers
COPY files/Caddyfile /home/
COPY files/serena_config.yml /home/serena_config.yml
COPY files/selfsigned.crt /home/
COPY files/selfsigned.key /home/
COPY files/start-tmux.sh /home/start-tmux.sh

@@ -57,6 +57,10 @@ services:
- 4201:4201
- 4202:4202
# Serena MCP server (agentic mode only)
- ${SERENA_EXTERNAL_PORT:-14281}:14281
- ${SERENA_DASHBOARD_EXTERNAL_PORT:-14282}:24282
environment:
- EXTERNAL_UID=${CURRENT_USER_ID}
# SMTP setup

@@ -10,7 +10,17 @@ cp /root/.bashrc /home/penpot/.bashrc
cp /root/.vimrc /home/penpot/.vimrc
cp /root/.tmux.conf /home/penpot/.tmux.conf
# Seed SERENA_HOME with default config on first run
mkdir -p ${SERENA_HOME}
if [ ! -f "${SERENA_HOME}/serena_config.yml" ]; then
cp /home/serena_config.yml "${SERENA_HOME}/serena_config.yml"
fi
chown -R penpot:users ${SERENA_HOME}
chown penpot:users /home/penpot
# we need to be able to install rust-analyzer and possibly other dependencies with rustup
chown -R penpot:ubuntu /opt/rustup
rsync -ar --chown=penpot:users /opt/cargo/ /home/penpot/.cargo/
export JAVA_OPTS="-Djava.net.preferIPv4Stack=true"

@@ -0,0 +1,153 @@
language_backend: LSP
# line ending convention to use when writing source files.
# Possible values: "lf" (Unix), "crlf" (Windows), "native" (platform default).
# Note that Serena's own files (e.g. memories and configuration files) always use native line endings.
# This setting can be overridden on a per-project basis in project.yml files.
line_ending: native
# whether to open a graphical window with Serena's logs.
# This is mainly supported on Windows and (partly) on Linux; not available on macOS.
# If you prefer a browser-based tool, use the `web_dashboard` option instead.
# Further information: https://oraios.github.io/serena/02-usage/060_dashboard.html
#
# Being able to inspect logs is useful both for troubleshooting and for monitoring the tool calls,
# especially when using the agno playground, since the tool calls are not always shown,
# and the input params are never shown in the agno UI.
# When used as MCP server for Claude Desktop, the logs are primarily for troubleshooting.
# Note: unfortunately, the various entities starting the Serena server or agent do so in
# mysterious ways, often starting multiple instances of the process without shutting down
# previous instances. This can lead to multiple log windows being opened, and only the last
# window being updated. Since we can't control how agno or Claude Desktop start Serena,
# we have to live with this limitation for now.
gui_log_window: false
# whether to start the Serena Dashboard, which provides detailed information on your Serena session,
# the current configuration and furthermore allows some settings to be conveniently modified on the fly.
# We strongly recommend always enabling this option!
# If you want to prevent the Dashboard window from being opened on launch,
# set `web_dashboard_open_on_launch` to false (see below).
# Further information: https://oraios.github.io/serena/02-usage/060_dashboard.html
web_dashboard: true
# whether to open the Dashboard window/browser tab when Serena starts (provided that web_dashboard is enabled).
# If set to false, you can still open the dashboard manually by clicking on the Serena icon in your system
# tray on Windows and macOS. On Linux, there is no system tray support, so you can only open the dashboard by
# a) telling the LLM to "open the dashboard" (provided that the open_dashboard tool is enabled) or by
# b) manually navigating to http://localhost:24282/dashboard/ in your web browser (actual port
# may be higher if you have multiple instances running; try ports 24283, 24284, etc.)
# See also: https://oraios.github.io/serena/02-usage/060_dashboard.html
web_dashboard_open_on_launch: false
# defines the interface (application mode) used for the web dashboard (if enabled).
# If empty/null, use platform-dependent default. Otherwise, possible values:
# * browser: the dashboard is opened in the default browser (if `web_dashboard_open_on_launch` is true)
# This is supported on all platforms.
# * app: the dashboard is opened in a separate native-like app window with accompanying tray icon, whose
# lifecycle is tied to the Serena process.
# If `web_dashboard_open_on_launch` is false, the dashboard can be conveniently accessed via the tray icon.
# This is supported on Windows and macOS, but note that on macOS, where tray icons are very visible,
# this may result in too many icons being displayed when using multi-agent setups.
# * tray_manager: use a global tray icon to provide access to the dashboards of all running Serena instances,
# opening the dashboard in browser tabs when selected from the tray menu.
# This is EXPERIMENTAL. It is tested on Windows only. We will establish macOS support, but it is yet untested.
# On Linux, this cannot be universally supported, but it may work in some desktop environments.
web_dashboard_interface:
# the address the web dashboard will listen on (bind address).
web_dashboard_listen_address: 0.0.0.0
# address where JetBrains plugin servers are running (only relevant when using the JetBrains language backend)
jetbrains_plugin_server_address: 127.0.0.1
# the minimum log level for the GUI log window and the dashboard (10 = debug, 20 = info, 30 = warning, 40 = error)
log_level: 20
# whether to trace the communication between Serena and the language servers.
# This is useful for debugging language server issues.
trace_lsp_communication: false
# advanced configuration option allowing you to configure language server-specific options.
# Maps the language key to the options.
# Have a look at the docstring of the constructors of the LS implementations within solidlsp (e.g., for C# or PHP) to see which options are available.
# No documentation on options means no options are available.
ls_specific_settings: {}
# list of paths to ignore across all projects.
# Same syntax as gitignore, so you can use * and **.
# These patterns are merged additively with each project's own ignored_paths.
ignored_paths: []
# list of regex patterns which, when matched, mark a memory entry as readonly.
# For example, "global/.*" will mark all global memories as read-only.
# You can extend the list on a per-project basis in the project.yml configuration file.
read_only_memory_patterns: []
# list of regex patterns for memories to completely ignore.
# Matching memories will not appear in list_memories or activate_project output
# and cannot be accessed via read_memory or write_memory.
# To access ignored memory files, use the read_file tool on the raw file path.
# This is useful for projects with large numbers of archived memory files.
# You can extend the list on a per-project basis in the project.yml configuration file.
# Example: ["_archive/.*", "_episodes/.*"]
ignored_memory_patterns: []
# timeout, in seconds, after which tool executions are terminated
tool_timeout: 240
# list of tools to be globally excluded
excluded_tools: []
# list of optional tools (which are disabled by default) to be included
included_optional_tools: []
# fixed set of tools to use as the base tool set (if non-empty), replacing Serena's default set of tools.
# This cannot be combined with non-empty excluded_tools or included_optional_tools.
fixed_tools: []
# list of mode names that are always to be included in the set of active modes
# The full set of modes to be activated is base_modes + default_modes.
# If this is undefined, no base modes are included.
# The project configuration (project.yml) may override this setting.
base_modes: [no-onboarding]
# list of mode names that are to be activated by default.
# The full set of modes to be activated is base_modes + default_modes.
# These modes can be overridden by the project configuration (project.yml) or through the CLI (--mode).
default_modes:
- interactive
- editing
default_max_tool_answer_chars: 150000
# the name of the token count estimator to use for tool usage statistics.
# See the `RegisteredTokenCountEstimator` enum for available options.
#
# By default, a very naive character count estimator is used, which simply counts the number of characters.
# You can configure this to TIKTOKEN_GPT4 to use a local tiktoken-based estimator for GPT-4 (will download tiktoken
# data files on first run), or ANTHROPIC_CLAUDE_SONNET_4 which will use the (free of cost) Anthropic API to
# estimate the token count using the Claude Sonnet 4 tokenizer.
token_count_estimator: CHAR_COUNT
# time budget (seconds) per tool call for the retrieval of additional symbol information
# such as docstrings or parameter information.
# (currently only used by LSP-based tools).
# If the budget is exceeded, Serena stops issuing further retrieval requests
# and returns partial info results.
# 0 disables the budget (no early stopping). Negative values are invalid.
# This is an advanced setting that can help alleviate problems with LSP servers
# that have a slow implementation of request_hover (clangd is one of those)
# or with tool calls that find very many symbols.
# Can be overridden in project.yml.
symbol_info_budget: 10
# template for the location of the per-project .serena data folder (memories, caches, etc.).
# Supports the following placeholders:
# $projectDir - the absolute path to the project root directory
# $projectFolderName - the name of the project directory
# Default: "$projectDir/.serena" (data stored inside the project directory)
# Example for a central location: "/projects-metadata/$projectFolderName/.serena"
project_serena_folder_location: "$projectDir/.serena"
# the list of registered project paths (updated automatically).
projects:
- /home/penpot/penpot

View File

@ -41,4 +41,22 @@ tmux select-window -t penpot:3
tmux send-keys -t penpot 'cd penpot/backend' enter C-l
tmux send-keys -t penpot './scripts/start-dev' enter
if echo "$PENPOT_FLAGS" | grep -q "enable-mcp"; then
pushd ~/penpot/mcp/
./scripts/setup;
pnpm run build;
popd
tmux new-window -t penpot:4 -n 'mcp'
tmux select-window -t penpot:4
tmux send-keys -t penpot 'cd penpot/mcp' enter C-l
tmux send-keys -t penpot './scripts/start-mcp-devenv' enter
fi
if [ "${SERENA_ENABLED:-false}" = "true" ]; then
tmux new-window -t penpot:5 -n 'serena'
tmux select-window -t penpot:5
tmux send-keys -t penpot "serena start-mcp-server --transport streamable-http --port 14281 --project penpot --context ${SERENA_CONTEXT} --host 0.0.0.0" enter
fi
tmux -2 attach-session -t penpot

View File

@ -0,0 +1,193 @@
---
title: 3.11. Agentic Development Environment
desc: Dive into agentic Penpot development.
---
# Agentic Development Environment
The agentic DevEnv is an extension of the standard DevEnv
(the [general DevEnv instructions](/technical-guide/developer/devenv/) apply),
which is optimised for AI agent-based development,
adding tools and processes that support agentic automation.
The general workflow is as follows:
1. Start the agentic DevEnv.
2. Start a debugging-enabled browser and open Penpot, using a Penpot user with
the remote MCP integration enabled.
3. Use an AI client (MCP client) which is connected to a suite of MCP servers
to solve development tasks.
## Capabilities
The agentic DevEnv leverages several MCP servers in order to provide AI agents
with a comprehensive toolbox for Penpot development:
* **Penpot MCP Server** provides tools for directly interacting with a live Penpot instance,
enabling the agent to
* execute JavaScript code in the frontend (using the plugin API),
* execute ClojureScript code in the frontend (REPL),
* import .penpot files for reproducing issues,
* export design elements as images, and more.
* **Serena MCP Server** provides code intelligence tools with support for Clojure and TypeScript.
Its memory system is used to organise project knowledge in a context-efficient manner.
* **Playwright MCP Server** provides tools for browser remote control.
* (optional) **GitHub MCP Server** provides tools for interacting with GitHub (issues, PRs, etc.).
Equipped with the tools provided by these MCP servers, the agent can fully close the development loop,
i.e. it can:
* retrieve information on an issue from GitHub,
* import relevant design files for reproduction,
* execute JavaScript and ClojureScript code directly in Penpot in order to
* simulate user interactions (e.g. to reproduce an issue),
* test hypotheses on the root cause of an issue, and
* experiment with implementations before touching the actual codebase,
* detect, analyse and recover from crashes in the frontend,
* make code changes (using IDE-like symbolic operations),
* test the changes in the live Penpot instance, and
* create commits and PRs resolving the issue.
## Configuring and Starting the Agentic DevEnv
**First-Time Setup: Building the Image.** If you are starting the agentic DevEnv for the first time, you need to build
the updated docker image, adding support for agentic tools:
```bash
./manage.sh build-devenv --local
```
**Enable the Penpot MCP Connection in the Frontend.**
The agentic DevEnv relies on a connection between the Penpot frontend and the Penpot MCP server
being established automatically.
Edit the file `frontend/resources/public/js/config.js`,
creating it if it does not exist, and make sure the `penpotFlags` variable contains the
`enable-mcp` flag.
```javascript
var penpotFlags = "enable-mcp";
```
**Running the DevEnv in Agentic Mode.** Start the DevEnv in agentic mode with:
```bash
./manage.sh run-devenv-agentic
```
## Opening Penpot with Remote Debugging & MCP Enabled
**Enable Remote Debugging in Your Browser.**
Penpot needs to be opened in a browser that has remote debugging enabled.
In Chromium-based browsers (such as Google Chrome, Opera, Vivaldi, etc.),
this can be achieved by launching the browser with the `--remote-debugging-port` argument.
For most newer browsers, you will also need to specify a user data directory,
as using debugging with your regular browser profile is disallowed for security reasons.
```bash
google-chrome --remote-debugging-port=9222 --user-data-dir="$HOME/.chrome-debug-profile"
```
This enables the Playwright MCP server to connect to the browser and control it.
Verify that debugging was enabled correctly by navigating to `http://127.0.0.1:9222/json/version`.
If you change the port, adjust the MCP server configuration accordingly (see below).
Note: For security reasons, you should not enable remote debugging with a profile
that you use for regular browsing activities.
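To confirm from the command line that the endpoint is reachable, a quick check can look like this (a sketch; `curl` and the default port 9222 are assumed, and `devtools_url` is a hypothetical helper, not part of the DevEnv tooling):

```bash
# Hypothetical helper: build the DevTools version URL for a given port
# (9222 is the default used above)
devtools_url() {
  printf 'http://127.0.0.1:%s/json/version' "${1:-9222}"
}

# Prints browser and protocol version info when remote debugging is active
curl -s "$(devtools_url)" || echo "endpoint not reachable - is the browser running with --remote-debugging-port?"
```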
**Open Penpot with the MCP Integration Enabled.**
The Penpot instance in the DevEnv can be accessed at [https://localhost:3449](https://localhost:3449).
Once logged in, navigate to your account settings, click on "Integrations" in the sidebar, and enable the "MCP Server" toggle.
Note: You do not need to use the generated key (or the provided URL), as the MCP server in the agentic DevEnv is running in single-user mode and does not require authentication.
## Configuring Your AI Client
Your AI client needs to be configured to connect to the MCP servers that collectively provide the agent with the necessary tools for Penpot development.
As an example, we provide below a JSON-based configuration snippet, using `mcp-remote` to wrap the HTTP-based servers.
Most clients using JSON-based configuration (e.g. Copilot, JetBrains AI Assistant, Claude Desktop, Antigravity)
will work when inserting the server entries below into the client's configuration file.
If your client uses a different configuration format, extract the relevant information (i.e. server URLs or launch commands)
and configure the servers appropriately, referring to the documentation of your client.
```json
{
"mcpServers": {
"penpot": {
"command": "npx",
"args": ["-y", "mcp-remote", "http://localhost:4401/mcp", "--allow-http" ]
},
"serena-devenv": {
"command": "npx",
"args": ["-y", "mcp-remote", "http://localhost:14281/mcp", "--allow-http"]
},
"playwright": {
"command": "npx",
"args": ["@playwright/mcp@latest", "--cdp-endpoint=http://127.0.0.1:9222"]
},
"github": {
"command": "npx",
"args": ["-y", "@modelcontextprotocol/server-github"],
"env": {
"GITHUB_PERSONAL_ACCESS_TOKEN": "TODO_your_token"
}
}
}
}
```
**Penpot MCP Server**
* The URL above connects directly to the server in the DevEnv, which runs in single-user mode.
You do not need to use the proxied URL or the user token that is provided by the Penpot UI.
**Serena MCP Server**
* You can access Serena's dashboard at [http://localhost:14282](http://localhost:14282)
**GitHub MCP Server**
* This MCP server is optional. (Alternatively, the agent can use the GitHub CLI via direct shell access.)
* You need to provide a personal access token (PAT) with appropriate permissions:
* Create a token in your GitHub account settings [here](https://github.com/settings/personal-access-tokens).
* Choose the right resource owner: As a member of the `penpot` organisation, be sure to create a token where the resource owner is the organisation.
Otherwise, you will not be able to create pull requests or issues in the `penpot/penpot` repository.
* Grant the necessary permissions, e.g. read and write access to issues and pull requests.
## Working on Development Tasks
After making the configuration changes, restart your AI client.
All four MCP servers should now be running and accessible to your client.
The agent's entry point for development is activating the `penpot` project with Serena.
Start by instructing your agent as follows:
> Activate project penpot.
and it should retrieve fundamental project information,
expecting further instructions on what to do.
**Always start your first prompt with these activation instructions**, as this bootstraps the agent's context.
### Checking MCP Server Operability
To check if all integrations are working correctly, you can perform a series of tests.
1. Open Penpot in the debugging-enabled browser and open a design file.
2. Ask the agent to activate the project (Serena project activation):
> Activate project penpot.
3. **Penpot MCP**
* Checking the connection to the Penpot frontend:
> Get an overview of the current page in Penpot by using the `execute_code` tool.
* Checking the ClojureScript REPL:
> Use the `cljs_repl` tool to check whether the Penpot frontend has crashed.
4. **Serena MCP**
* Checking Serena's symbolic tools:
> Use the `find_symbol` tool to find function `locate-shape` (cljs) and class `PenpotMcpServer` (ts)
5. **Playwright MCP**
* Checking the connection to the browser:
> Use Playwright MCP server to find the Penpot browser tab.

View File

@ -108,13 +108,57 @@ function log-devenv {
}
function run-devenv-tmux {
local extra_env_args=()
while [[ $# -gt 0 ]]; do
case "$1" in
-e)
extra_env_args+=(-e "$2"); shift 2;;
-e*)
extra_env_args+=(-e "${1#-e}"); shift;;
*)
shift;;
esac
done
if [[ ! $(docker ps -f "name=penpot-devenv-main" -q) ]]; then
start-devenv
echo "Waiting for containers to fully start (5s)..."
sleep 5;
fi
docker exec -ti penpot-devenv-main sudo -EH -u penpot PENPOT_PLUGIN_DEV=$PENPOT_PLUGIN_DEV /home/start-tmux.sh
docker exec -ti \
"${extra_env_args[@]}" \
penpot-devenv-main sudo -EH -u penpot PENPOT_PLUGIN_DEV=$PENPOT_PLUGIN_DEV /home/start-tmux.sh
}
function run-devenv-agentic {
local serena_context="desktop-app"
local serena_external_port="14281"
local serena_dashboard_external_port="14282"
while [[ $# -gt 0 ]]; do
case "$1" in
--serena-context)
serena_context="$2"; shift 2;;
*)
shift;;
esac
done
if [[ ! $(docker ps -f "name=penpot-devenv-main" -q) ]]; then
SERENA_EXTERNAL_PORT="$serena_external_port" \
SERENA_DASHBOARD_EXTERNAL_PORT="$serena_dashboard_external_port" \
start-devenv
echo "Waiting for containers to fully start (5s)..."
sleep 5;
fi
run-devenv-tmux \
-e SERENA_ENABLED=true \
-e SERENA_CONTEXT="$serena_context" \
-e PENPOT_FLAGS="${PENPOT_FLAGS} enable-mcp"
}
function run-devenv-shell {
@ -358,6 +402,9 @@ function usage {
echo "- stop-devenv Stops the development oriented docker compose service."
echo "- drop-devenv Remove the development oriented docker compose containers, volumes and clean images."
echo "- run-devenv Attaches to the running devenv container and starts development environment"
echo " Optional -e flags are forwarded to 'docker exec' (e.g. -e MY_VAR=value)."
echo "- run-devenv-agentic Like run-devenv but with additional processes for agentic development enabled."
echo " Options: --serena-context CONTEXT (default: desktop-app)"
echo "- run-devenv-shell Attaches to the running devenv container and starts a bash shell."
echo "- isolated-shell Starts a bash shell in a new devenv container."
echo "- log-devenv Show logs of the running devenv docker compose service."
@ -405,6 +452,9 @@ case $1 in
run-devenv)
run-devenv-tmux ${@:2}
;;
run-devenv-agentic)
run-devenv-agentic ${@:2}
;;
run-devenv-shell)
run-devenv-shell ${@:2}
;;

View File

@ -272,6 +272,7 @@ The Penpot MCP server can be configured using environment variables.
| `PENPOT_MCP_REPL_PORT` | Port for the REPL server (development/debugging) | `4403` |
| `PENPOT_MCP_SERVER_ADDRESS` | Hostname or IP address via which clients can reach the MCP server | `localhost` |
| `PENPOT_MCP_REMOTE_MODE` | Enable remote mode (disables file system access). Set to `true` to enable. | `false` |
| `PENPOT_MCP_DEVENV` | Enable Penpot development environment tools. Set to `true` to enable. | `false` |
### Logging Configuration

View File

@ -5,7 +5,7 @@
"type": "module",
"main": "dist/index.js",
"scripts": {
"build:server": "esbuild src/index.ts --bundle --platform=node --target=node18 --format=esm --outfile=dist/index.js --external:@modelcontextprotocol/* --external:ws --external:express --external:class-transformer --external:class-validator --external:reflect-metadata --external:pino --external:pino-pretty --external:pino-loki --external:js-yaml --external:sharp",
"build:server": "esbuild src/index.ts --bundle --platform=node --target=node18 --format=esm --outfile=dist/index.js --external:@modelcontextprotocol/* --external:ws --external:express --external:class-transformer --external:class-validator --external:reflect-metadata --external:pino --external:pino-pretty --external:pino-loki --external:js-yaml --external:sharp --external:nrepl-client",
"build": "pnpm run build:server && node scripts/copy-resources.js",
"build:types": "tsc --emitDeclarationOnly --outDir dist",
"start": "node dist/index.js",
@ -29,6 +29,7 @@
"class-validator": "^0.14.3",
"express": "^5.1.0",
"js-yaml": "^4.1.1",
"nrepl-client": "^0.3.0",
"penpot-mcp": "file:..",
"pino": "^9.10.0",
"pino-loki": "^2.6.0",

View File

@ -0,0 +1,217 @@
import nreplClient from "nrepl-client";
import type { NreplConnection, NreplMessage } from "nrepl-client";
import { createLogger } from "./logger";
/**
* Result of evaluating a ClojureScript expression via nREPL.
*/
export interface NreplEvalResult {
/** the returned value(s) as strings */
values: string[];
/** captured stdout output */
out: string;
/** captured stderr output */
err: string;
/** the namespace after evaluation */
ns: string;
}
/**
* A client for communicating with a shadow-cljs nREPL server.
*
* This client maintains a persistent nREPL session, so that definitions,
* requires, and other state are preserved across evaluations providing
* a full REPL experience.
*/
export class NreplClient {
private static readonly NREPL_PORT = 3447;
private static readonly NREPL_HOST = "localhost";
private static readonly EVAL_TIMEOUT_MS = 30_000;
private readonly logger = createLogger("NreplClient");
/** the persistent connection to the nREPL server, established lazily */
private connection: NreplConnection | null = null;
/** the cloned session ID that persists state across evaluations */
private sessionId: string | null = null;
/**
* Evaluates a Clojure expression on the nREPL server within the persistent session.
*
* @param code - the Clojure expression to evaluate
* @returns the evaluation result
*/
async eval(code: string): Promise<NreplEvalResult> {
this.logger.debug("Evaluating Clojure expression: %s", code);
const conn = await this.ensureConnection();
const sessionId = await this.ensureSession(conn);
return new Promise<NreplEvalResult>((resolve, reject) => {
const timeout = setTimeout(() => {
reject(new Error(`nREPL evaluation timed out after ${NreplClient.EVAL_TIMEOUT_MS}ms`));
}, NreplClient.EVAL_TIMEOUT_MS);
conn.send({ op: "eval", code, session: sessionId }, (err: Error | null, result: NreplMessage[]) => {
clearTimeout(timeout);
if (err) {
reject(err);
return;
}
try {
resolve(this.parseEvalResult(result));
} catch (parseErr) {
reject(parseErr);
}
});
});
}
/**
* Evaluates a ClojureScript expression via the shadow-cljs CLJS eval API.
*
* The expression is wrapped in a call to `shadow.cljs.devtools.api/cljs-eval`
* targeting the `:main` build, so it is evaluated in the browser runtime.
*
* @param cljsCode - the ClojureScript expression to evaluate
* @returns the evaluation result
*/
async evalCljs(cljsCode: string): Promise<NreplEvalResult> {
// escape the CLJS code for embedding in a Clojure string
const escapedCode = cljsCode.replace(/\\/g, "\\\\").replace(/"/g, '\\"');
const wrappedCode = `(shadow.cljs.devtools.api/cljs-eval :main "${escapedCode}" {})`;
this.logger.debug("Evaluating CLJS expression via shadow-cljs: %s", cljsCode);
return this.eval(wrappedCode);
}
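// Example (illustrative, not part of the class): evalCljs('(str "hi")')
// escapes the embedded quotes and sends the following wrapped form to the
// nREPL server for evaluation in the browser runtime:
//
//   (shadow.cljs.devtools.api/cljs-eval :main "(str \"hi\")" {})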
/**
* Closes the persistent connection and session, releasing all resources.
*/
async close(): Promise<void> {
if (this.connection) {
this.logger.info("Closing nREPL connection");
this.connection.end();
this.connection = null;
this.sessionId = null;
}
}
/**
* Ensures a connection to the nREPL server is established, creating one if necessary.
*
* If the existing connection has been closed or errored, a new one is created.
*/
private async ensureConnection(): Promise<NreplConnection> {
if (this.connection && !this.connection.destroyed) {
return this.connection;
}
// reset state since the old connection is gone
this.connection = null;
this.sessionId = null;
this.logger.info("Connecting to nREPL server at %s:%d", NreplClient.NREPL_HOST, NreplClient.NREPL_PORT);
return new Promise<NreplConnection>((resolve, reject) => {
const conn = nreplClient.connect({
port: NreplClient.NREPL_PORT,
host: NreplClient.NREPL_HOST,
});
conn.once("connect", () => {
this.connection = conn;
// handle unexpected disconnects so the next eval reconnects
conn.once("close", () => {
this.logger.warn("nREPL connection closed unexpectedly");
this.connection = null;
this.sessionId = null;
});
conn.once("error", (err: Error) => {
this.logger.error("nREPL connection error: %s", err);
this.connection = null;
this.sessionId = null;
});
resolve(conn);
});
conn.once("error", (err: Error) => {
reject(
new Error(
`Failed to connect to nREPL server at ${NreplClient.NREPL_HOST}:${NreplClient.NREPL_PORT}: ${err.message}`
)
);
});
});
}
/**
* Ensures a persistent nREPL session exists, cloning one from the server if necessary.
*
* A cloned session maintains its own state (namespace bindings, definitions, etc.)
* independently of other sessions.
*/
private async ensureSession(conn: NreplConnection): Promise<string> {
if (this.sessionId) {
return this.sessionId;
}
this.logger.info("Cloning new nREPL session");
return new Promise<string>((resolve, reject) => {
conn.clone((err: Error | null, result: NreplMessage[]) => {
if (err) {
reject(new Error(`Failed to clone nREPL session: ${err.message}`));
return;
}
const sessionMsg = result.find((msg) => msg["new-session"] !== undefined) as any;
if (!sessionMsg) {
reject(new Error("nREPL clone response did not contain a new session ID"));
return;
}
this.sessionId = sessionMsg["new-session"];
this.logger.info("Cloned nREPL session: %s", this.sessionId);
resolve(this.sessionId!);
});
});
}
/**
* Parses the raw nREPL response messages into a structured result.
*/
private parseEvalResult(messages: NreplMessage[]): NreplEvalResult {
const values: string[] = [];
const outParts: string[] = [];
const errParts: string[] = [];
let ns = "user";
for (const msg of messages) {
if (msg.value !== undefined) {
values.push(msg.value);
}
if (msg.out) {
outParts.push(msg.out);
}
if (msg.err) {
errParts.push(msg.err);
}
if (msg.ns) {
ns = msg.ns;
}
if (msg.ex) {
throw new Error(`nREPL evaluation error: ${msg.ex}${msg.err ? "\n" + msg.err : ""}`);
}
}
return {
values,
out: outParts.join(""),
err: errParts.join(""),
ns,
};
}
}

View File

@ -11,6 +11,11 @@ import { HighLevelOverviewTool } from "./tools/HighLevelOverviewTool";
import { PenpotApiInfoTool } from "./tools/PenpotApiInfoTool";
import { ExportShapeTool } from "./tools/ExportShapeTool";
import { ImportImageTool } from "./tools/ImportImageTool";
import { CljsReplTool } from "./tools/CljsReplTool";
import { ImportPenpotFileTool } from "./tools/ImportPenpotFileTool";
import { CljsCompilerOutputTool } from "./tools/CljsCompilerOutputTool";
import { CljCheckParentheses } from "./tools/CljCheckParentheses";
import { NreplClient } from "./NreplClient";
import { ReplServer } from "./ReplServer";
import { ApiDocs } from "./ApiDocs";
@ -151,6 +156,16 @@ export class PenpotMcpServer {
return !this.isRemoteMode();
}
/**
* Indicates whether the server is running in a Penpot development environment.
*
* When enabled (by setting the environment variable PENPOT_MCP_DEVENV to "true"),
* additional developer tools such as ClojureScript expression evaluation are exposed.
*/
public isDevEnv(): boolean {
return process.env.PENPOT_MCP_DEVENV === "true";
}
/**
* Retrieves the high-level overview instructions explaining core Penpot usage.
*/
@ -177,6 +192,13 @@ export class PenpotMcpServer {
if (this.isFileSystemAccessEnabled()) {
toolInstances.push(new ImportImageTool(this));
}
if (this.isDevEnv()) {
const nreplClient = new NreplClient();
toolInstances.push(new CljsReplTool(this, nreplClient));
toolInstances.push(new ImportPenpotFileTool(this, nreplClient));
toolInstances.push(new CljsCompilerOutputTool(this, nreplClient));
toolInstances.push(new CljCheckParentheses(this));
}
return toolInstances.map((instance) => {
this.logger.info(`Registering tool: ${instance.getToolName()}`);
@ -341,6 +363,7 @@ export class PenpotMcpServer {
this.app.listen(this.port, this.host, async () => {
this.logger.info(`Multi-user mode: ${this.isMultiUserMode()}`);
this.logger.info(`Remote mode: ${this.isRemoteMode()}`);
this.logger.info(`DevEnv mode: ${this.isDevEnv()}`);
this.logger.info(`Modern Streamable HTTP endpoint: http://${this.host}:${this.port}/mcp`);
this.logger.info(`Legacy SSE endpoint: http://${this.host}:${this.port}/sse`);
this.logger.info(`WebSocket server URL: ws://${this.host}:${this.webSocketPort}`);

View File

@ -0,0 +1,242 @@
import { z } from "zod";
import { Tool } from "../Tool";
import "reflect-metadata";
import type { ToolResponse } from "../ToolResponse";
import { TextResponse } from "../ToolResponse";
import type { PenpotMcpServer } from "../PenpotMcpServer";
import * as fs from "fs";
/**
* Arguments for the CljCheckParentheses tool.
*/
export class CljCheckParenthesesArgs {
static schema = {
file: z.string().min(1).describe("Absolute path to a Clojure/ClojureScript source file"),
};
file!: string;
}
interface OpenDelim {
id: number;
line: number; // 0-based
col: number; // 0-based
char: string;
baselineKey: string; // the baseline key this delimiter owns
}
interface ParenIssue {
line: number; // 1-based
col: number; // 1-based
char: string;
detectedAtLine?: number; // 1-based line where the stack-state mismatch was observed
}
/**
* Finds unclosed delimiters in Clojure/ClojureScript source files using a
* stack-state invariant derived from cljfmt formatting conventions.
*
* Invariant: in cljfmt-formatted code, every opening delimiter of type T at
* column C must see the same stack depth each time that (T, C) combination
* occurs. A depth mismatch means delimiters opened between the baseline
* occurrence and the current one were never closed.
*
* The parser correctly handles string literals (including multi-line and escape
* sequences), comment lines, character literals, and regex literals.
*/
export class CljCheckParentheses extends Tool<CljCheckParenthesesArgs> {
constructor(mcpServer: PenpotMcpServer) {
super(mcpServer, CljCheckParenthesesArgs.schema);
}
public getToolName(): string {
return "clj_check_parentheses";
}
public getToolDescription(): string {
return "Analyzes a Clojure/ClojureScript source file for unclosed delimiters and reports the area of interest.";
}
protected async executeCore(args: CljCheckParenthesesArgs): Promise<ToolResponse> {
const filePath = args.file;
if (!fs.existsSync(filePath)) {
return new TextResponse(`File not found: ${filePath}`);
}
const content = fs.readFileSync(filePath, "utf-8");
const issues = analyzeParens(content);
if (issues.length === 0) {
return new TextResponse("All delimiters are properly balanced.");
}
const sourceLines = content.split("\n");
const parts: string[] = [`Found ${issues.length} unclosed delimiter(s):\n`];
for (const issue of issues) {
const srcLine = (sourceLines[issue.line - 1] ?? "").trimEnd();
// the " | " segment keeps the caret aligned with the reported column in the "N | source" line above it
const pointer = " ".repeat(String(issue.line).length) + " | " + " ".repeat(issue.col - 1) + "^";
if (issue.detectedAtLine != null) {
const detectedSrcLine = (sourceLines[issue.detectedAtLine - 1] ?? "").trimEnd();
parts.push(
` Unclosed '${issue.char}' at line ${issue.line}, col ${issue.col}:\n` +
` ${issue.line} | ${srcLine}\n` +
` ${pointer}\n` +
` Stack-state mismatch detected at line ${issue.detectedAtLine}:\n` +
` ${issue.detectedAtLine} | ${detectedSrcLine}\n`
);
} else {
parts.push(
` Unclosed '${issue.char}' at line ${issue.line}, col ${issue.col} (still open at end of file):\n` +
` ${issue.line} | ${srcLine}\n` +
` ${pointer}\n`
);
}
}
return new TextResponse(parts.join("\n"));
}
}
/**
* Analyses delimiter balance in a Clojure/ClojureScript source string.
*
* Algorithm
* ---------
* Maintain a stack of open delimiters and a map from (delimiter-type, column)
* to the stack depth recorded on the first occurrence of that combination.
*
* Each time an opening delimiter of type T appears at column C:
* 1. Look up the key (T, C) in the map.
* 2. If absent, record the current stack depth as the baseline.
* 3. If present, compare the current depth with the baseline.
* - If deeper: the extra stack entries (from baseline depth to current
* depth) are delimiters that should have been closed. Report them.
* - If shallower: more delimiters were closed than opened between the
* baseline and here (over-closed). Update the baseline downward so
* subsequent occurrences don't cascade.
* 4. Push the delimiter onto the stack.
*
* After the full file is processed, any delimiter still on the stack is
* unclosed. If it was already reported via a mismatch, the report includes
* the detection line; otherwise it is reported as open-at-EOF.
*/
function analyzeParens(content: string): ParenIssue[] {
// Precompute line-start offsets for O(1) column lookup.
const lineStarts: number[] = [0];
for (let i = 0; i < content.length; i++) {
if (content[i] === "\n") lineStarts.push(i + 1);
}
let nextId = 0;
const stack: OpenDelim[] = [];
// (type, column) → baseline stack depth.
// Each baseline is owned by the delimiter that established it (stored
// as baselineKey on the stack entry). When that delimiter is popped,
// its baseline is discarded — it was scoped to that delimiter's lifetime.
const baseline: Map<string, number> = new Map();
let inString = false;
let inComment = false;
let escape = false;
let currentLine = 0;
for (let i = 0; i < content.length; i++) {
const ch = content[i];
// ── Newline ──────────────────────────────────────────────────────
if (ch === "\n") {
inComment = false;
currentLine++;
if (!inString) escape = false;
continue;
}
// ── Escape: skip next character ──────────────────────────────────
if (escape) {
escape = false;
continue;
}
// ── Inside comment: skip until newline ───────────────────────────
if (inComment) continue;
// ── Inside string literal ────────────────────────────────────────
if (inString) {
if (ch === "\\") escape = true;
else if (ch === '"') inString = false;
continue;
}
// ── Outside string / comment ─────────────────────────────────────
if (ch === "\\") {
escape = true;
continue;
}
if (ch === '"') {
inString = true;
continue;
}
if (ch === ";") {
inComment = true;
continue;
}
// ── Opening delimiter ────────────────────────────────────────────
if (ch === "(" || ch === "[" || ch === "{") {
const col = i - lineStarts[currentLine];
const key = `${ch}:${col}`;
const currentDepth = stack.length;
const recorded = baseline.get(key);
if (recorded !== undefined && currentDepth > recorded) {
// Stack is deeper than expected. The entries from index
// `recorded` to `currentDepth - 1` are unclosed delimiters
// that should have been closed before reaching this
// position. Return immediately — further parsing would be
// against a corrupted stack and only produce cascading noise.
return stack.slice(recorded, currentDepth).map((delim) => ({
line: delim.line + 1,
col: delim.col + 1,
char: delim.char,
detectedAtLine: currentLine + 1,
}));
}
// Establish or re-establish the baseline for this key,
// owned by this delimiter. Discarded when it is popped.
baseline.set(key, currentDepth);
stack.push({
id: nextId++,
line: currentLine,
col,
char: ch,
baselineKey: key,
});
}
// ── Closing delimiter ────────────────────────────────────────────
else if (ch === ")" || ch === "]" || ch === "}") {
if (stack.length > 0) {
const closed = stack.pop()!;
// The baseline this delimiter owned is no longer valid —
// the context it was recorded in has closed.
baseline.delete(closed.baselineKey);
}
}
}
// ── EOF: no mismatch was found, but the stack is not empty ──────────
// This happens when the unclosed delimiter has no second occurrence of
// the same (type, column) to compare against (e.g. last form in file).
return stack.map((delim) => ({
line: delim.line + 1,
col: delim.col + 1,
char: delim.char,
}));
}
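// Worked example (illustrative): in the cljfmt-formatted input below, the
// closing paren of `(defn a ...)` is missing. The second top-level "(" at
// column 0 sees stack depth 1 instead of its recorded baseline 0, so the
// "(" opened on line 1 is reported, with detectedAtLine pointing at the
// second defn:
//
//   (defn a []
//     (inc 1)          ; the ")" closing `defn a` is missing here
//   (defn b [] nil)    ; depth mismatch detected here
//
// analyzeParens returns [{ line: 1, col: 1, char: "(", detectedAtLine: 3 }].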

View File

@ -0,0 +1,52 @@
import { Tool, EmptyToolArgs } from "../Tool";
import "reflect-metadata";
import type { ToolResponse } from "../ToolResponse";
import { TextResponse } from "../ToolResponse";
import { PenpotMcpServer } from "../PenpotMcpServer";
import { NreplClient } from "../NreplClient";
/**
* Reports the compiler status of the shadow-cljs `:main` build.
*
* If the most recent build failed, returns the relevant fields of the failure data
* (tag, message, resource name, line, column, etc.); otherwise returns `:ok`.
*/
export class CljsCompilerOutputTool extends Tool<EmptyToolArgs> {
private static readonly STATUS_CODE =
"(require (quote [shadow.cljs.devtools.api :as shadow])) " +
"(let [fd (-> (shadow/get-worker :main) :state-ref deref :failure-data)] " +
"(if fd (pr-str fd) :ok))";
private readonly nreplClient: NreplClient;
constructor(mcpServer: PenpotMcpServer, nreplClient: NreplClient) {
super(mcpServer, EmptyToolArgs.schema);
this.nreplClient = nreplClient;
}
public getToolName(): string {
return "cljs_compiler_output";
}
public getToolDescription(): string {
return (
"Reports the status of the most recent shadow-cljs `:main` build. " +
"Use this to diagnose compilation errors when needed. For syntax errors, " +
"consider using the clj_check_parentheses tool on the relevant source files."
);
}
protected async executeCore(_args: EmptyToolArgs): Promise<ToolResponse> {
const result = await this.nreplClient.eval(CljsCompilerOutputTool.STATUS_CODE);
// multiple top-level forms produce multiple values; the build status is the last one
const status = result.values[result.values.length - 1] ?? "nil";
const parts: string[] = [status];
if (result.err) {
parts.push(`stderr:\n${result.err}`);
}
return new TextResponse(parts.join("\n\n"));
}
}
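Because `STATUS_CODE` contains two top-level forms (the `require` and the `let`), the nREPL response carries one value per form, and only the last value is the build status. A standalone sketch of that selection logic (the helper name is assumed):

```typescript
// Mirrors how executeCore picks the build status: multiple top-level forms
// yield multiple values, and the status is the last one ("nil" if none).
function extractBuildStatus(values: string[]): string {
  return values[values.length - 1] ?? "nil";
}
```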

@ -0,0 +1,75 @@
import { z } from "zod";
import { Tool } from "../Tool";
import "reflect-metadata";
import type { ToolResponse } from "../ToolResponse";
import { TextResponse } from "../ToolResponse";
import { PenpotMcpServer } from "../PenpotMcpServer";
import { NreplClient } from "../NreplClient";
/**
* Arguments for the CljsReplTool.
*/
export class CljsReplArgs {
static schema = {
code: z.string().min(1, "Code cannot be empty"),
};
/**
* The ClojureScript code to evaluate in the frontend runtime.
*/
code!: string;
}
/**
* A ClojureScript REPL for the Penpot frontend runtime.
*
* This tool provides a persistent REPL session connected to the shadow-cljs nREPL server.
* Definitions, requires, and other state are preserved across calls, enabling iterative
* exploration and manipulation of the running Penpot application.
*/
export class CljsReplTool extends Tool<CljsReplArgs> {
private readonly nreplClient: NreplClient;
/**
* Creates a new CljsReplTool instance.
*
* @param mcpServer - the MCP server instance
* @param nreplClient - the nREPL client for communicating with shadow-cljs
*/
constructor(mcpServer: PenpotMcpServer, nreplClient: NreplClient) {
super(mcpServer, CljsReplArgs.schema);
this.nreplClient = nreplClient;
}
public getToolName(): string {
return "cljs_repl";
}
public getToolDescription(): string {
return (
"Persistent ClojureScript REPL in the Penpot frontend runtime (via shadow-cljs nREPL). " +
"Definitions, requires, and state are preserved across calls — use it to build up helpers incrementally. " +
"Multiple top-level expressions per call are supported; each produces a result line."
);
}
protected async executeCore(args: CljsReplArgs): Promise<ToolResponse> {
const result = await this.nreplClient.evalCljs(args.code);
const parts: string[] = [];
if (result.values.length > 0) {
parts.push(result.values.join("\n"));
}
if (result.out) {
parts.push(`stdout:\n${result.out}`);
}
if (result.err) {
parts.push(`stderr:\n${result.err}`);
}
if (parts.length === 0) {
parts.push("nil");
}
return new TextResponse(parts.join("\n\n"));
}
}
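The response assembly in `executeCore` can be read as a small pure function: values first, then stdout and stderr sections, separated by blank lines, with `"nil"` as the fallback. A self-contained sketch (the `formatReplResult` name is an assumption):

```typescript
interface EvalResult {
  values: string[];
  out?: string;
  err?: string;
}

// Assembles the tool response the same way executeCore does.
function formatReplResult(result: EvalResult): string {
  const parts: string[] = [];
  if (result.values.length > 0) parts.push(result.values.join("\n"));
  if (result.out) parts.push(`stdout:\n${result.out}`);
  if (result.err) parts.push(`stderr:\n${result.err}`);
  if (parts.length === 0) parts.push("nil");
  return parts.join("\n\n");
}
```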

@ -0,0 +1,370 @@
import { z } from "zod";
import { Tool } from "../Tool";
import { TextResponse, ToolResponse } from "../ToolResponse";
import "reflect-metadata";
import { PenpotMcpServer } from "../PenpotMcpServer";
import { NreplClient } from "../NreplClient";
import { createLogger } from "../logger";
import * as crypto from "crypto";
import * as fs from "fs";
import * as path from "path";
import * as https from "https";
import * as http from "http";
/**
* Arguments for ImportPenpotFileTool.
*/
export class ImportPenpotFileArgs {
static schema = {
url: z.url().describe("URL of the .penpot file to import."),
};
/** URL of the .penpot file to import */
url!: string;
}
/**
* Tool for importing a .penpot file into the running Penpot instance.
*
* Downloads the file from the given URL to a temporary location in the frontend's
* static directory, then triggers the import via the Penpot frontend's web worker
* using the ClojureScript REPL. The temporary file is cleaned up after the import
* completes (or fails).
*
* Only available in devenv mode, as it requires the ClojureScript nREPL connection.
*/
export class ImportPenpotFileTool extends Tool<ImportPenpotFileArgs> {
private static readonly POLL_INTERVAL_MS = 1_000;
private static readonly IMPORT_TIMEOUT_MS = 120_000;
// assumes cwd is the server package root (same assumption as ConfigurationLoader)
private static readonly PUBLIC_DIR = path.resolve("../../../frontend/resources/public");
private static readonly NAVIGATION_HINT =
"To open an imported file in the workspace, use cljs_repl with:\n" +
"(do (require '[app.main.data.common :as dcm])\n" +
" (app.main.store/emit! (dcm/go-to-workspace\n" +
' :team-id (parse-uuid "<team-id>")\n' +
' :file-id (parse-uuid "<file-id>")\n' +
' :page-id (parse-uuid "<page-id>"))))';
private readonly log = createLogger("ImportPenpotFileTool");
private readonly nreplClient: NreplClient;
/**
* Creates a new ImportPenpotFileTool instance.
*
* @param mcpServer - the MCP server instance
* @param nreplClient - the nREPL client for communicating with shadow-cljs
*/
constructor(mcpServer: PenpotMcpServer, nreplClient: NreplClient) {
super(mcpServer, ImportPenpotFileArgs.schema);
this.nreplClient = nreplClient;
}
public getToolName(): string {
return "import_penpot_file";
}
public getToolDescription(): string {
return (
"Imports a .penpot file into the running Penpot instance from a given URL. " +
"The file is imported into the user's Drafts project. " +
"Returns the name(s) of the imported file(s)."
);
}
protected async executeCore(args: ImportPenpotFileArgs): Promise<ToolResponse> {
// generate a random filename for the temporary file
const randomName = `_import_${crypto.randomUUID()}.penpot`;
const tempFilePath = path.join(ImportPenpotFileTool.PUBLIC_DIR, randomName);
const servePath = `/${randomName}`;
try {
// download the file
this.log.info("Downloading .penpot file from %s", args.url);
await this.downloadFile(args.url, tempFilePath);
const fileSize = fs.statSync(tempFilePath).size;
this.log.info("Downloaded %d bytes to %s", fileSize, tempFilePath);
// set up the import via CLJS REPL
const atomName = `import-result-${crypto.randomUUID().slice(0, 8)}`;
const setupCode = this.buildImportCode(atomName, servePath);
this.log.info("Initiating import via CLJS REPL");
const setupResult = await this.nreplClient.evalCljs(setupCode);
this.log.debug("CLJS setup result: %s", JSON.stringify(setupResult));
// check for immediate errors in the setup
if (setupResult.err) {
throw new Error(`CLJS evaluation error: ${setupResult.err}`);
}
// poll for the import result
const result = await this.pollForResult(atomName);
return new TextResponse(result);
} finally {
// clean up the temporary file
this.cleanupTempFile(tempFilePath);
}
}
/**
* Builds the ClojureScript code that fetches the file from the static directory,
* creates a blob URL, and triggers the import via the web worker.
*
* @param atomName - unique name for the result atom
* @param servePath - the URL path to fetch the file from (same-origin)
* @returns the ClojureScript code string
*/
private buildImportCode(atomName: string, servePath: string): string {
// escape for embedding in a CLJS string
const escapedPath = servePath.replace(/\\/g, "\\\\").replace(/"/g, '\\"');
const escapedAtom = atomName.replace(/\\/g, "\\\\").replace(/"/g, '\\"');
return `
(do
(require '[app.main.store :as st])
(require '[app.main.worker :as mw])
(require '[app.common.uuid :as uuid])
(require '[beicon.v2.core :as rx])
(def ${escapedAtom} (atom {:status :pending}))
(let [project-id (->> @st/state :projects vals (filter :is-default) first :id)
file-ids-before (set (keys (:files @st/state)))]
(-> (js/fetch "${escapedPath}")
(.then (fn [resp]
(when-not (.-ok resp)
(reset! ${escapedAtom} {:status :error :error (str "Fetch failed: " (.-status resp))})
(throw (js/Error. (str "Fetch failed: " (.-status resp)))))
(.blob resp)))
(.then (fn [blob]
(let [uri (js/URL.createObjectURL blob)
file-id (uuid/next)
entries [{:file-id file-id
:name "import"
:type :binfile-v3
:uri uri}]]
(->> (mw/ask-many!
{:cmd :import-files
:project-id project-id
:files entries
:features (get @st/state :features)})
(rx/subs!
(fn [msg]
(when (= :finish (:status msg))
(reset! ${escapedAtom}
{:status :success
:file-ids-before file-ids-before})))
(fn [err]
(reset! ${escapedAtom} {:status :error :error (str err)}))
(fn []
(when (= :pending (:status @${escapedAtom}))
(reset! ${escapedAtom} {:status :error :error "Stream completed without success message"}))))))))
(.catch (fn [err]
(when (= :pending (:status @${escapedAtom}))
(reset! ${escapedAtom} {:status :error :error (str err)}))))))
:initiated)
`;
}
/**
* Builds the ClojureScript code that resolves the imported file details.
*
* Refreshes the dashboard, diffs the file list against the pre-import snapshot,
* and for each new file fetches the first page-id via the backend API.
*
* @param atomName - the atom holding the import result (including :file-ids-before)
* @param resultAtomName - the atom to store the final file details in
* @returns the ClojureScript code string
*/
private buildResolveCode(atomName: string, resultAtomName: string): string {
return `
(do
(require '[app.main.store :as st])
(require '[app.main.repo :as rp])
(require '[app.main.data.dashboard :as dd])
(require '[beicon.v2.core :as rx])
(def ${resultAtomName} (atom {:status :pending}))
(let [file-ids-before (:file-ids-before @${atomName})
team-id (:current-team-id @st/state)]
;; refresh dashboard files
(st/emit! (dd/fetch-recent-files))
;; wait a moment for the state to update, then resolve
(js/setTimeout
(fn []
(let [all-files (vals (:files @st/state))
new-files (remove #(contains? file-ids-before (:id %)) all-files)
file-count (count new-files)]
(if (zero? file-count)
(reset! ${resultAtomName} {:status :success :files []})
;; fetch page-ids for each new file
(let [remaining (atom file-count)
results (atom [])]
(doseq [f new-files]
(->> (rp/cmd! :get-file {:id (:id f) :features (get @st/state :features)})
(rx/subs!
(fn [file-data]
(swap! results conj
{:file-id (str (:id f))
:name (:name f)
:team-id (str team-id)
:page-id (str (first (get-in file-data [:data :pages])))})
(when (zero? (swap! remaining dec))
(reset! ${resultAtomName} {:status :success :files @results})))
(fn [err]
(swap! results conj
{:file-id (str (:id f))
:name (:name f)
:team-id (str team-id)
:error (str err)})
(when (zero? (swap! remaining dec))
(reset! ${resultAtomName} {:status :success :files @results}))))))))))
500))
:initiated)
`;
}
/**
* Polls the CLJS atom for the import result until it succeeds, fails, or times out.
* On success, resolves the imported file details (server-side IDs, names, page-ids).
*
* @param atomName - the name of the atom to poll
* @returns a JSON string with the imported file details
*/
private async pollForResult(atomName: string): Promise<string> {
const startTime = Date.now();
// phase 1: wait for the import to complete
while (Date.now() - startTime < ImportPenpotFileTool.IMPORT_TIMEOUT_MS) {
await this.sleep(ImportPenpotFileTool.POLL_INTERVAL_MS);
const pollResult = await this.nreplClient.evalCljs(`(pr-str @${atomName})`);
const resultStr = pollResult.values.join("");
this.log.debug(`Poll result: ${resultStr}`);
if (resultStr.includes(":success")) {
this.log.info("Import succeeded, resolving file details...");
return await this.resolveImportedFiles(atomName);
} else if (resultStr.includes(":error")) {
this.log.error(`Import failed: ${resultStr}`);
throw new Error(`Import failed: ${resultStr}`);
}
}
throw new Error(`Import timed out after ${ImportPenpotFileTool.IMPORT_TIMEOUT_MS / 1000} seconds`);
}
/**
* After a successful import, resolves the actual server-side file details
* by diffing the dashboard file list and fetching page IDs.
*
* @param atomName - the atom holding the import result with :file-ids-before
* @returns a JSON string with the imported file details
*/
private async resolveImportedFiles(atomName: string): Promise<string> {
const resultAtomName = `import-details-${crypto.randomUUID().slice(0, 8)}`;
const resolveCode = this.buildResolveCode(atomName, resultAtomName);
await this.nreplClient.evalCljs(resolveCode);
// poll the result atom
const startTime = Date.now();
const resolveTimeoutMs = 15_000;
while (Date.now() - startTime < resolveTimeoutMs) {
await this.sleep(ImportPenpotFileTool.POLL_INTERVAL_MS);
const pollResult = await this.nreplClient.evalCljs(`(pr-str @${resultAtomName})`);
const resultStr = pollResult.values.join("");
if (resultStr.includes(":success")) {
this.log.info("File details resolved");
return resultStr + "\n\n" + ImportPenpotFileTool.NAVIGATION_HINT;
}
}
this.log.warn("Timed out resolving file details, returning basic success");
return "Import succeeded but could not resolve file details.";
}
/**
* Downloads a file from a URL to a local path.
*
* @param url - the URL to download from
* @param destPath - the local file path to write to
*/
private downloadFile(url: string, destPath: string): Promise<void> {
return new Promise((resolve, reject) => {
const client = url.startsWith("https") ? https : http;
const file = fs.createWriteStream(destPath);
const request = client.get(url, (response) => {
// handle redirects
if (
response.statusCode &&
response.statusCode >= 300 &&
response.statusCode < 400 &&
response.headers.location
) {
file.close();
fs.unlinkSync(destPath);
this.downloadFile(response.headers.location, destPath).then(resolve, reject);
return;
}
if (response.statusCode && response.statusCode !== 200) {
file.close();
fs.unlinkSync(destPath);
reject(new Error(`Download failed with status ${response.statusCode}`));
return;
}
response.pipe(file);
file.on("finish", () => {
// wait for the descriptor to be released before resolving
file.close(() => resolve());
});
});
request.on("error", (err) => {
file.close();
if (fs.existsSync(destPath)) {
fs.unlinkSync(destPath);
}
reject(new Error(`Download error: ${err.message}`));
});
file.on("error", (err) => {
file.close();
if (fs.existsSync(destPath)) {
fs.unlinkSync(destPath);
}
reject(new Error(`File write error: ${err.message}`));
});
});
}
/**
* Removes the temporary file, logging but not throwing on failure.
*/
private cleanupTempFile(filePath: string): void {
try {
if (fs.existsSync(filePath)) {
fs.unlinkSync(filePath);
this.log.info("Cleaned up temporary file: %s", filePath);
}
} catch (err) {
this.log.warn("Failed to clean up temporary file %s: %s", filePath, err);
}
}
private sleep(ms: number): Promise<void> {
return new Promise((resolve) => setTimeout(resolve, ms));
}
}
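`buildImportCode` embeds the serve path and atom name inside a ClojureScript string literal, so both are escaped with the same two-step replace: backslashes are doubled first, then quotes are escaped (doing it in the other order would corrupt already-escaped quotes). A standalone version of that helper for illustration:

```typescript
// Escapes a value for embedding in a ClojureScript string literal.
// Backslashes must be doubled before quotes are escaped.
function escapeForCljsString(s: string): string {
  return s.replace(/\\/g, "\\\\").replace(/"/g, '\\"');
}
```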

@ -0,0 +1,53 @@
declare module "nrepl-client" {
import type { Socket } from "net";
interface NreplConnection extends Socket {
/**
* Evaluates the given Clojure expression on the nREPL server.
*
* @param code - the Clojure expression to evaluate
* @param callback - called with an error or array of response messages
*/
eval(code: string, callback: (err: Error | null, result: NreplMessage[]) => void): void;
/**
* Sends a raw nREPL message to the server.
*/
send(message: Record<string, unknown>, callback: (err: Error | null, result: NreplMessage[]) => void): void;
/**
* Clones the current session, creating a new session that inherits the current state.
*/
clone(callback: (err: Error | null, result: NreplMessage[]) => void): void;
/**
* Closes the current session.
*/
close(callback: (err: Error | null, result: NreplMessage[]) => void): void;
}
interface NreplMessage {
id?: string;
session?: string;
"new-session"?: string;
ns?: string;
value?: string;
out?: string;
err?: string;
ex?: string;
status?: string[];
}
interface ConnectOptions {
port: number;
host?: string;
}
/**
* Creates a connection to an nREPL server.
*/
function connect(options: ConnectOptions): NreplConnection;
// ambient modules cannot default-export an object literal,
// so the default export is declared as a typed constant
const nreplClient: { connect: typeof connect };
export default nreplClient;
export type { NreplConnection, NreplMessage, ConnectOptions };
}
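The `nrepl-client` API declared above is callback-based; a thin promise wrapper makes it easier to use from the `async` tool methods. A hedged sketch, tested here against a stub connection rather than the real library:

```typescript
interface Msg {
  value?: string;
  err?: string;
}

interface EvalConn {
  eval(code: string, cb: (err: Error | null, msgs: Msg[]) => void): void;
}

// Hypothetical promise wrapper over the callback-style eval declared above.
function evalAsync(conn: EvalConn, code: string): Promise<Msg[]> {
  return new Promise((resolve, reject) => {
    conn.eval(code, (err, msgs) => (err ? reject(err) : resolve(msgs)));
  });
}
```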

mcp/pnpm-lock.yaml generated
@ -60,6 +60,9 @@ importers:
js-yaml:
specifier: ^4.1.1
version: 4.1.1
nrepl-client:
specifier: ^0.3.0
version: 0.3.0
penpot-mcp:
specifier: file:..
version: packages@file:packages
@ -881,6 +884,9 @@ packages:
resolution: {integrity: sha512-kNOjDqAh7px0XWNI+4QbzoiR/nTkHAWNud2uvnJquD1/x5a7EQZMJT0AczqK0Qn67oY/TTQ1LbUKajZpp3I9tQ==}
engines: {node: '>=8.0.0'}
bencode@2.0.3:
resolution: {integrity: sha512-D/vrAD4dLVX23NalHwb8dSvsUsxeRPO8Y7ToKA015JQYq69MLDOMkC0uGZYA/MPpltLO8rt8eqFC2j8DxjTZ/w==}
body-parser@2.2.2:
resolution: {integrity: sha512-oP5VkATKlNwcgvxi0vM0p/D3n2C3EReYVX+DNYs5TjZFn/oQt2j+4sVJtSMr18pdRr8wjTcBl6LoV+FUwzPmNA==}
engines: {node: '>=18'}
@ -1221,6 +1227,9 @@ packages:
resolution: {integrity: sha512-8Ofs/AUQh8MaEcrlq5xOX0CQ9ypTF5dl78mjlMNfOK08fzpgTHQRQPBxcPlEtIw0yRpws+Zo/3r+5WRby7u3Gg==}
engines: {node: '>= 0.6'}
nrepl-client@0.3.0:
resolution: {integrity: sha512-EcROXUrzlGHKOdu/E/5WB0OESCI0iGHhdXeYk9cULYtd72eFJrM/Q1umvjTBfKWlT62y76cnyLG/3CmSCqT12w==}
object-assign@4.1.1:
resolution: {integrity: sha512-rJgTQnkUnH1sFw8yT6VSU3zD3sWmu6sZhIseY8VX+GRu3P6F7Fu+JNDoXfklElbLJSnc3FUQHVe4cU5hj+BcUg==}
engines: {node: '>=0.10.0'}
@ -2092,6 +2101,8 @@ snapshots:
atomic-sleep@1.0.0: {}
bencode@2.0.3: {}
body-parser@2.2.2:
dependencies:
bytes: 3.1.2
@ -2452,6 +2463,11 @@ snapshots:
negotiator@1.0.0: {}
nrepl-client@0.3.0:
dependencies:
bencode: 2.0.3
tree-kill: 1.2.2
object-assign@4.1.1: {}
object-inspect@1.13.4: {}

mcp/scripts/start-mcp-devenv Executable file
@ -0,0 +1,6 @@
#!/bin/sh
# Starts the MCP server configured for Penpot development
# (assumes it is run inside the devenv)
PENPOT_MCP_SERVER_HOST=0.0.0.0 PENPOT_MCP_REMOTE_MODE=true PENPOT_MCP_DEVENV=true pnpm run bootstrap