Version: 0.5.0

Multi-Agent Coordination

ClaudusBridge 0.5.0 introduces first-class multi-agent coordination. Per-session event subscriptions, an agent_message bus, an always-on internal watcher, and a mandatory canonical visual watcher sub-agent compose into a coherent system where multiple AI clients can work in parallel on the same Unreal session without stepping on each other — and where every tool call is verified visually instead of trusted blindly.

This guide documents the building blocks and the conventional patterns. The plugin does not enforce these patterns; they are Schelling points so workflows compose without prior coordination between agents.


Building Block 1 — Per-Session Event Queue

Every event the editor fires (selection changes, actor spawns/deletions, blueprint compiles, asset saves, PIE start/stop, camera moves, asset registry changes) is fanned out at insert time to every session subscribed to that event type. Each session has its own private queue.

# Claude Code session
subscribe_events { session: "claude-code", event_types: ["actor_spawned","selection_changed"] }
poll_editor_events { session: "claude-code", limit: 50 }

# Codex CLI session, in parallel
subscribe_events { session: "codex-cli", event_types: ["blueprint_compiled"] }
poll_editor_events { session: "codex-cli", limit: 50 }

Both sessions run concurrently. When the editor spawns an actor, the event lands in claude-code's queue but not in codex-cli's (codex-cli didn't subscribe to actor_spawned). When a Blueprint compiles, only codex-cli sees it. Subscriptions act as a filter at insert time, not at poll time — there is no retroactive mode. Pick the types you care about up front.

Omitting session falls back to a shared "global" bucket — fine for single-agent setups, but use distinct names when multiple agents are connected.

Convention: use the same string for clientInfo.name in initialize, session in subscribe / poll, and from_session in publish_agent_event. That makes logs, dashboards, and event payloads all line up under one name.
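The insert-time fan-out can be modeled in a few lines. This is a sketch of the assumed semantics, not the plugin's implementation; the `EventHub` class and its method names are illustrative:

```python
from collections import defaultdict, deque

class EventHub:
    """Per-session event queues with insert-time fan-out.

    Model of the documented behavior: subscriptions filter when an event
    is inserted, so events fired before a session subscribes are never
    retroactively delivered to it.
    """
    def __init__(self, maxlen=1000):
        self._subs = {}                                  # session -> set of event types
        self._queues = defaultdict(lambda: deque(maxlen=maxlen))

    def subscribe(self, session, event_types):
        self._subs[session] = set(event_types)

    def fire(self, event_type, data=None):
        # Fan out at insert time: only currently subscribed sessions get a copy.
        for session, types in self._subs.items():
            if event_type in types or "all" in types:
                self._queues[session].append({"type": event_type, "data": data})

    def poll(self, session, limit=50):
        q = self._queues[session]
        return [q.popleft() for _ in range(min(limit, len(q)))]

hub = EventHub()
hub.subscribe("claude-code", ["actor_spawned", "selection_changed"])
hub.subscribe("codex-cli", ["blueprint_compiled"])
hub.fire("actor_spawned", {"path": "/Game/Maps/Demo.BP_Boat_C_1"})
hub.fire("blueprint_compiled", {"asset": "/Game/BP_Boat"})
```

Polling drains the caller's private queue only: `claude-code` sees the spawn, `codex-cli` sees the compile, and a second poll returns nothing.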


Building Block 2 — publish_agent_event (the agent bus)

The mirror of editor → agent events is publish_agent_event — agents talking to each other. It pushes a custom event of type agent_message into the same per-session queue that delivers editor events.

publish_agent_event {
  from_session: "watcher",
  target_session: "primary",   // omit for broadcast
  channel: "watcher.observation",
  payload: {
    kind: "actor_modified",
    actor_path: "/Game/.../BP_Boat",
    observation: "User dragged actor to (1200, -450, 0)"
  }
}

Receiver side — same poll, filter by event type and channel:

subscribe_events { session: "primary", event_types: ["agent_message", "actor_spawned", ...] }
# ... do work ...
events = poll_editor_events { session: "primary", limit: 50 }
for ev in events:
    if ev.type == "agent_message" and ev.data.channel == "watcher.observation":
        # react to watcher's report

Targeted delivery (target_session set) sends to one recipient. Broadcast (target_session omitted) sends to every session subscribed to agent_message except the sender's from_session (no echo).
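The targeted-versus-broadcast rule, including the no-echo behavior, can be sketched as follows. The dictionaries and function body are an illustrative model of the documented semantics, not the plugin's code:

```python
from collections import defaultdict, deque

queues = defaultdict(deque)                  # session -> pending events
subscribed = {                               # who subscribed to agent_message
    "primary": {"agent_message"},
    "watcher": {"agent_message"},
    "recorder": {"agent_message"},
}

def publish_agent_event(from_session, channel, payload, target_session=None):
    event = {"type": "agent_message",
             "data": {"from_session": from_session, "channel": channel,
                      "payload": payload}}
    if target_session is not None:
        queues[target_session].append(event)          # targeted: one recipient
        return
    for session, types in subscribed.items():
        if "agent_message" in types and session != from_session:
            queues[session].append(event)             # broadcast: skip sender (no echo)

publish_agent_event("watcher", "watcher.observation",
                    {"kind": "actor_modified"}, target_session="primary")
publish_agent_event("watcher", "alert.warning", {"msg": "PIE stopped early"})
```

After these two calls, `primary` holds the targeted observation plus the broadcast, `recorder` holds only the broadcast, and `watcher` (the sender) holds nothing.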


Building Block 3 — Always-on internal watcher

ClaudusBridge ships with an in-process editor watcher written in C++ (FCBInternalWatcher). It starts the moment the plugin loads — no MCP client connection required, no external Python process required, no user setup. From that point on, every editor event the plugin sees is also inspected by the internal watcher's pattern rules:

  • blueprint_compile_loop — 5+ blueprint_compiled events in 10s.
  • pie_immediate_stop — PIE stopped < 5s after starting (likely crash).
  • orphan_actor_after_pie — net actor leak > 5 across a PIE session.

When a rule fires, the watcher calls RecordFeedbackInternal directly, which appends to Saved/ClaudusBridge/feedback.jsonl AND broadcasts on the agent_message bus on channel feedback.<kind>. Any agent subscribed to agent_message will see the entry as from_session="plugin-internal-watcher". The persistent JSONL accumulates across editor sessions and is the canonical plugin-improvement signal.
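A pattern rule like blueprint_compile_loop is essentially a sliding-window counter. This Python sketch mirrors the assumed semantics (5+ compiles within 10 s); the class name and API are hypothetical stand-ins for the C++ rule:

```python
from collections import deque

class CompileLoopRule:
    """Fires when `threshold` blueprint_compiled events land within
    a `window`-second sliding window (assumed rule semantics)."""
    def __init__(self, threshold=5, window=10.0):
        self.threshold, self.window = threshold, window
        self.times = deque()

    def on_event(self, event_type, now):
        if event_type != "blueprint_compiled":
            return False
        self.times.append(now)
        while self.times and now - self.times[0] > self.window:
            self.times.popleft()              # drop timestamps outside the window
        return len(self.times) >= self.threshold

rule = CompileLoopRule()
# Six compiles in nine seconds: the rule fires on the fifth and sixth.
hits = [rule.on_event("blueprint_compiled", t) for t in (0, 2, 4, 6, 8, 9)]
```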


Building Block 4 — Mandatory canonical visual watcher sub-agent

Every connecting AI client must spawn a visual watcher sub-agent using the canonical prompt at Saved/ClaudusBridge/watcher_subagent.prompt.md. The prompt is auto-generated on every MCP initialize so it always matches the plugin version.

The watcher:

  1. Opens /preview?bare=1 through Claude in Chrome (or equivalent browser-automation MCP)
  2. Subscribes to ClaudusBridge events as session visual-watcher
  3. Polls events every ~12 seconds, takes screenshots when notable events arrive
  4. Compares the screenshot with the previous baseline (panels, tabs, asset selection, viewport content, status bar, error highlights, compile state)
  5. Publishes observations on the agent_message bus on channel watcher.observation targeted at primary

The primary agent drains the queue between user messages and uses the watcher's summaries as visual ground truth. This closes the verification gap between "tool returned success" and "the visual reality matches the intent".

For session-long persistence beyond chat-runtime sub-agent timeouts, also run the Python daemon Saved/ClaudusBridge/claudusbridge_watcher.py: the daemon covers event-pattern anomalies, while the sub-agent covers visual verification.
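The five steps above reduce to a poll-screenshot-compare-publish loop. This sketch uses hypothetical stand-ins (`client.poll`, `screenshot_fn`, `compare_fn`, `publish_fn`) for the real MCP and browser-automation tools, and runs a fixed number of ticks so it stays testable:

```python
import time

def watcher_loop(client, screenshot_fn, compare_fn, publish_fn,
                 interval=12.0, ticks=3):
    """Visual-watcher loop body (assumed shape, hypothetical callables)."""
    baseline = None
    for _ in range(ticks):
        events = client.poll("visual-watcher", limit=50)
        if events:                            # notable activity: take a screenshot
            shot = screenshot_fn()
            diff = compare_fn(baseline, shot) # None means "nothing changed"
            if diff:
                publish_fn(from_session="visual-watcher",
                           target_session="primary",
                           channel="watcher.observation",
                           payload={"observation": diff})
            baseline = shot                   # new baseline for the next compare
        time.sleep(interval)
```

The real sub-agent would loop forever with `interval=12` and richer comparison logic; the structure (poll, screenshot on activity, publish only on differences) is the point.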


Building Block 5 — record_feedback and the self-improvement loop

Every call to record_feedback does two things:

  1. Persists a JSON line to Saved/ClaudusBridge/feedback.jsonl (UTF-8, append-only, accumulates across editor sessions).
  2. Broadcasts an agent_message on channel feedback.<kind> so live agents (e.g. a triage agent watching for repeated bugs) can react.

Call record_feedback liberally. The cost is negligible and the signal is gold. Anything you noticed — a tool that worked but had unexpected friction, a tool you wished existed, a visual that didn't match the tool response, an obvious bug — should be logged. The accumulated JSONL is reviewed regularly and turned into new tools, fixes, and schema improvements. The plugin gets better automatically as agents use it.

record_feedback {
  from_session: "watcher",
  kind: "visual_mismatch",
  description: "primary called set_material_parameter on M_Boat, tool returned
                success, but the boat in the viewport is using its parent
                material instance with the old color.",
  context: {
    tool_call: "set_material_parameter",
    tool_args: { name: "BaseColor", value: [0.2, 0.5, 0.8] },
    related_actor: "BP_Boat_C_42",
    screenshot_path: "Saved/ClaudusBridge/Vision/latest.png",
    observation: "boat color is still red, not blue"
  }
}
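The dual write (persist to JSONL, then broadcast on the bus) can be sketched like this. The function is an illustrative model; the `subscribed`/`queues` structures and the `path` parameter are assumptions, not the plugin's API:

```python
import json
import os
from collections import defaultdict, deque

FEEDBACK_PATH = "Saved/ClaudusBridge/feedback.jsonl"
queues = defaultdict(deque)
subscribed = {"triage": {"agent_message"}, "watcher": {"agent_message"}}

def record_feedback(from_session, kind, description, context=None,
                    path=FEEDBACK_PATH):
    entry = {"from_session": from_session, "kind": kind,
             "description": description, "context": context or {}}
    # 1. Persist one JSON line; append-only, survives editor restarts.
    os.makedirs(os.path.dirname(path), exist_ok=True)
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")
    # 2. Broadcast on channel feedback.<kind> to live agents (no echo).
    event = {"type": "agent_message",
             "data": {"from_session": from_session,
                      "channel": f"feedback.{kind}", "payload": entry}}
    for session, types in subscribed.items():
        if "agent_message" in types and session != from_session:
            queues[session].append(event)
```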

Patterns

1. Watcher pattern — one observer, one actor

session="primary":
    Subscribes to [agent_message, blueprint_compiled, ...]
    Drives the user's request via MCP tool calls.

session="watcher":
    Opens /preview?bare=1 in a separate browser tab.
    Subscribes to [actor_spawned, selection_changed, pie_started, asset_saved].
    Periodically takes screenshots, reads the editor outliner / details panel,
    publishes findings on channel='watcher.observation' to target='primary'.

The primary agent stays focused on the user request and receives a curated stream of observations from the watcher without polling state itself.

2. Specialist pattern — split work by domain

session="blueprint-bot": subscribes to [blueprint_compiled, agent_message]
session="material-bot":  subscribes to [asset_saved, agent_message]
session="landscape-bot": subscribes to [actor_spawned, agent_message]
session="primary":       delegates by publishing channel='delegate.<domain>'
                         with target_session set to the specialist.

Each specialist works in its own MCP session, so a heavy material recompile doesn't block primary's user-facing latency.

3. Pair-programming pattern — driver + reviewer

session="driver":
    Makes the changes via MCP.

session="reviewer":
    Subscribes to [actor_spawned, blueprint_compiled, agent_message],
    publishes channel='review.feedback' on issues.

Driver subscribes to its own targeted agent_message and responds to reviewer feedback.

4. Recorder pattern — passive log of everything

session="recorder":
    Subscribes to ['all'], polls with clear=true,
    appends each event to Saved/ClaudusBridge/sessions/<id>.jsonl.

Useful for debugging, replay, or post-mortem of a long task.
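The recorder's loop body is a one-liner per event. A minimal sketch, assuming a `poll_fn` callable that stands in for poll_editor_events with clear=true:

```python
import json

def run_recorder(poll_fn, out_path):
    """Drain the queue once and append every event as one JSON line.
    Call this between operations or on a timer; returns the event count."""
    events = poll_fn()
    with open(out_path, "a", encoding="utf-8") as f:
        for ev in events:
            f.write(json.dumps(ev) + "\n")
    return len(events)
```

Because the file is append-only JSONL, a post-mortem is just `json.loads` per line, in order.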

5. Sentinel pattern — alert on specific signals

session="sentinel":
    Subscribes to [pie_stopped, blueprint_compiled, asset_saved],
    broadcasts agent_message channel='alert.*' when it detects regressions
    (e.g. PIE crashed within 10s of start).
    Other agents subscribed to agent_message react.

Canonical channel namespaces

These are conventions every agent should adopt so workflows compose without prior coordination. The plugin does not enforce them; they are simply the Schelling points that let independently written agents interoperate.

| Namespace | Purpose | Typical sender |
| --- | --- | --- |
| watcher.observation | Curated visual report of editor state changes | watcher |
| verification.needed | Primary asks any watcher to confirm a recent action | primary |
| verification.confirmed | Watcher confirms reality matches intent | watcher |
| verification.rejected | Watcher detected a mismatch (also call record_feedback) | watcher |
| task.progress | Specialist reports progress on a delegated task | specialist |
| task.complete | Specialist signals completion + result summary | specialist |
| alert.error | Sentinel raises an error that needs attention | sentinel |
| alert.warning | Sentinel raises a non-blocking concern | sentinel |
| handoff.request | Agent asks to pass work to another agent | any |
| handoff.accept | Receiver acknowledges and takes the task | any |
| delegate.<domain> | Primary asks a specialist to handle a domain task | primary |
| review.feedback | Reviewer comments on driver's work | reviewer |
| feedback.tool_friction | A tool worked but workflow was unexpectedly painful | any |
| feedback.visual_mismatch | Tool reported success, viewport disagrees | watcher |
| feedback.suggest_tool | An agent needed a tool that does not exist yet | any |
| feedback.plugin_bug | Something is clearly broken (crash, error) | any |
| feedback.observation | Noteworthy pattern that informs future improvements | any |
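Because every channel is `namespace.detail`, a receiver can route on the prefix alone. A minimal dispatch sketch (the `handlers` mapping is hypothetical):

```python
def dispatch(event, handlers):
    """Route an agent_message by its channel namespace (the part before
    the first dot). `handlers` maps a namespace string to a callable;
    unknown namespaces are ignored and return None."""
    channel = event["data"]["channel"]
    namespace = channel.split(".", 1)[0]
    handler = handlers.get(namespace)
    return handler(event) if handler else None
```

This is why adopting the shared namespaces matters: a triage agent can handle every `feedback.*` channel, present and future, with one handler.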

13 Editor Event Types

Use these exact snake_case strings in subscribe_events:

| Event type | Fires when |
| --- | --- |
| selection_changed | Selected actor(s) changed in the editor; useful for noticing that the human user (or another agent) clicked something |
| actor_spawned | New actor placed in the level; this is how you notice other MCP agents calling spawn_actor |
| actor_deleted | Actor removed from the level |
| level_changed | Map / sublevel opened or closed |
| asset_saved | Any package saved to disk |
| asset_imported | New asset finished importing |
| asset_added / asset_removed / asset_renamed | Content Browser asset registry change |
| blueprint_compiled | Any Blueprint just finished compiling; invalidate your cached schema |
| pie_started / pie_stopped | Play-In-Editor lifecycle |
| camera_changed | Editor camera moved (debounced; fires after the user stops dragging) |
| agent_message | Custom message published via publish_agent_event |

Anti-pattern — busy-polling state

Don't:

while True:
    actors = list_actors()
    if actors != last:
        # ...
    sleep(1)

Do:

subscribe_events(["all"])               # once per session
# ... do other work ...
events = poll_editor_events(limit=50) # between operations
for ev in events:
    if ev.type == "actor_spawned": ...
    if ev.type == "blueprint_compiled": invalidate_cache(ev.data)
    if ev.type == "agent_message" and ev.data.channel == "watcher.observation":
        react_to_visual_observation(ev.data.payload)

Startup flood guard

Events fired during the first 5 seconds after editor launch are dropped. If you connect immediately on startup and see no events for actions you expected, wait a few seconds and retry — they weren't lost because of polling, they were never queued.