Version: 0.5.0

Neural Cortex Architecture

The Neural Cortex Architecture organizes all 492+ tools as interconnected neurons within cortex regions. Originally introduced in v0.4.0 and extended in v0.5.0 with multi-agent coordination, the cortex gives AI clients one native interface in which perception, navigation, creation, modification, inspection, building, scene management, blueprints, media, engineering, and evaluation share context and work together.


How It Works

Instead of calling individual tools in isolation, the Neural Cortex organizes tools into regions that communicate with each other:

AI Client (Claude Code, Codex CLI, Cursor, Windsurf, ...)
|
v
Neural Cortex (CBNeuralCortex.cpp)
|
+-- ai_brain_map -> Discovery (lists every region + every action)
+-- ai_perceive -> Structured perception snapshot
+-- ai_navigate -> Smooth movement, never teleport
+-- ai_create -> Spawn / create anything
+-- ai_modify -> Edit transforms, properties, components
+-- ai_inspect -> Query / inspect anything
+-- ai_build -> Compile, validate, build, save
+-- ai_scene -> Levels, landscape, source control, gameplay
+-- ai_blueprint -> Blueprint graphs, BTs, BlueprintLisp
+-- ai_media -> Audio, animation, sequencer, niagara, metasound
+-- ai_engineer -> Construction workflows (survey/place/verify/audit)
+-- ai_evaluate -> Orbital inspection waypoints
|
v
40+ Manager Classes -> UE5 Editor API

Each region handles a domain of intelligence:

Region       | Purpose                                                     | Sample Actions
ai_brain_map | Discovery — lists every region and every action it accepts | (no params)
ai_perceive  | See and understand the scene in one call                   | camera, outliner, proximity, collision, look_at
ai_navigate  | Move through the editor smoothly                           | absolute, relative, look_at_actor
ai_create    | Build and spawn anything                                   | actor, basic_shape, material, mesh, widget, niagara_system, level_sequence...
ai_modify    | Edit transforms, properties, components                    | transform, property, attach, mi_scalar, niagara_param, widget_property...
ai_inspect   | Query and inspect anything                                 | actors, find_actors, blueprint, graph, runtime_actors, perf, output_log...
ai_build     | Compile, validate, build, save                             | compile, validate, build_all, full_build, save_all, livecoding...
ai_scene     | Levels, landscape, source control, gameplay                | create_level, sublevels, foliage, source_control, gameplay_tags, nav_path...
ai_blueprint | Blueprint graphs, BTs, BlueprintLisp                       | add_node, connect, create_function, lisp_to_bp, bt_modify_key...
ai_media     | Audio, animation, sequencer, niagara, metasound            | play_sound, anim_notifies, play_sequence, niagara_params, preview_metasound...
ai_engineer  | Construction workflows                                     | survey, place, verify, audit, demolish, clear
ai_evaluate  | Orbital inspection waypoints                               | calculates a tour around a target for multi-angle review
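
Everything in this table is discoverable at runtime. A minimal discovery call (ai_brain_map takes no parameters, per the table), following the same curl pattern used later on this page:

curl -X POST http://localhost:3000/mcp \
  -d '{"jsonrpc":"2.0","id":1,"method":"tools/call","params":{
    "name":"ai_brain_map","arguments":{}}}'

The response enumerates each region and the actions it accepts, so an agent can self-orient without a hardcoded tool list.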

Perception Region

ClaudusBridge 0.5.0 splits perception into visual (the WebRTC /preview stream, observed by the watcher sub-agent) and structured (engine data without pixels). The MCP server no longer generates PNG frame-view files for agent sight — /preview is the canonical realtime view, and structured perception tools return scene metadata directly as JSON.

Visual Perception — /preview stream

The agent (or its watcher sub-agent) opens http://localhost:3000/preview?bare=1 in a browser tab controlled via Claude in Chrome (or any equivalent browser-automation MCP). Pixel Streaming 2 establishes the WebRTC peer connection automatically (AutoConnect=true&AutoPlayVideo=true) so no user gesture is required.

/preview?bare=1 strips all chrome (header, padding, frame border, HUD chips) so the iframe fills 100vw × 100vh and the stream renders at native resolution.

Structured Perception (ai_perceive)

Combines everything in one call:

curl -X POST http://localhost:3000/mcp \
-d '{"jsonrpc":"2.0","id":1,"method":"tools/call","params":{
"name":"ai_perceive","arguments":{}}}'

Returns: camera state, look_at target, full outliner, proximity sphere, collision sphere, ground trace, visible actors, and change detection.

Engine Data Stream (ai_vision_stream)

Lightweight engine data — camera transform, visible actors, proximity objects — without pixel capture. Used for continuous tracking between visual snapshots.
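
A minimal sketch of a stream poll, assuming ai_vision_stream, like ai_perceive, accepts an empty arguments object:

curl -X POST http://localhost:3000/mcp \
  -d '{"jsonrpc":"2.0","id":1,"method":"tools/call","params":{
    "name":"ai_vision_stream","arguments":{}}}'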

Pixel-free Vision (get_vision, scan_360, proximity_alert)

Real-time viewport vision via screen-space raycasting (no screenshots, no images, microsecond-class latency):

  • get_vision — Builds a depth map with actor identification per region, gap analysis, navigation suggestion, motion detection, blob detection, texture classification.
  • scan_360 — 360-degree peripheral scan: 36 rays at 10° intervals, returns distance per direction, blocked status, open directions, escape routes.
  • proximity_alert — 26 rays in a sphere, returns threat_level (safe/alert/DANGER/CRITICAL), per-actor distance, and an evasion vector with ready-to-use navigate() parameters.
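
For example, a defensive check before moving. Both calls below are sketches that assume empty arguments are accepted; run ai_brain_map for the authoritative schemas:

# Sweep 36 rays at 10-degree intervals for open directions and escape routes
curl -X POST http://localhost:3000/mcp \
  -d '{"jsonrpc":"2.0","id":1,"method":"tools/call","params":{
    "name":"scan_360","arguments":{}}}'

# Spherical 26-ray check for threat_level and an evasion vector
curl -X POST http://localhost:3000/mcp \
  -d '{"jsonrpc":"2.0","id":2,"method":"tools/call","params":{
    "name":"proximity_alert","arguments":{}}}'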

Cognitive Map (get_cognitive_map)

Top-down 2D map built from accumulated get_vision() calls. Each cell stores height, surface type, actor label, exploration status. Supports ASCII-art visualization or structured grid output.
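
A sketch of a map request; the "format" argument is a hypothetical name for choosing between the ASCII-art and structured-grid outputs mentioned above:

# "format" is an assumed parameter name, not confirmed by the schema
curl -X POST http://localhost:3000/mcp \
  -d '{"jsonrpc":"2.0","id":1,"method":"tools/call","params":{
    "name":"get_cognitive_map","arguments":{"format":"ascii"}}}'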


Navigation Region

All navigation uses smooth movement rather than teleportation, keeping camera motion continuous and the editing workflow professional.

Tool                    | Use Case
ai_navigate             | Smooth camera movement with absolute, relative, or look_at_actor modes
move_camera             | WASD-style relative movement (forward/right/up)
orbit_camera            | Orbit around a target point or actor
focus_viewport_on_actor | Center camera on a named actor
The standard navigation loop:

  1. Perceive current position
  2. Calculate destination
  3. Move smoothly (never teleport)
  4. Perceive again to verify arrival
  5. Adjust if needed
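
Sketched as MCP calls. The actor name SM_Door_01 and the "actor" argument name are illustrative; look_at_actor is a mode from the table above:

# 1-2. Perceive current position and pick a destination
curl -X POST http://localhost:3000/mcp \
  -d '{"jsonrpc":"2.0","id":1,"method":"tools/call","params":{
    "name":"ai_perceive","arguments":{}}}'

# 3. Move smoothly toward the target (never teleport)
curl -X POST http://localhost:3000/mcp \
  -d '{"jsonrpc":"2.0","id":2,"method":"tools/call","params":{
    "name":"ai_navigate","arguments":{"action":"look_at_actor","actor":"SM_Door_01"}}}'

# 4. Perceive again to verify arrival, adjust if needed
curl -X POST http://localhost:3000/mcp \
  -d '{"jsonrpc":"2.0","id":3,"method":"tools/call","params":{
    "name":"ai_perceive","arguments":{}}}'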

Creation Region

Routes creation requests to the appropriate manager based on what's being created:

  • Actors -> CBActorManager
  • Materials -> CBMaterialManager
  • Blueprints -> CBBlueprintManager
  • Widgets -> CBWidgetManager
  • Meshes -> CBMeshManager
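
For example, spawning a primitive. basic_shape is an action from the region table; "shape" and "location" are assumed argument names:

# "shape" and "location" are assumed argument names; check ai_brain_map for the real schema
curl -X POST http://localhost:3000/mcp \
  -d '{"jsonrpc":"2.0","id":1,"method":"tools/call","params":{
    "name":"ai_create","arguments":{"action":"basic_shape","shape":"cube","location":[0,0,100]}}}'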

Engineering Region

The engineering system provides construction workflow tools:

Action   | Description
survey   | Inspect terrain before building
place    | Place a piece with validation (door width, stair rise, bounds)
verify   | Navigate to and inspect a placed piece
audit    | Structural audit: errors, warnings, piece inventory
demolish | Remove a specific piece by label
clear    | Remove all pieces matching a prefix

Engineering Workflow

1. survey   -> Understand the terrain
2. place    -> Put down a piece (cimiento/foundation, pilar/pillar, muro/wall, ventana/window, ...)
3. verify   -> Move to inspect it visually
4. audit    -> Check structural integrity
5. Repeat   -> Until construction is complete
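
Sketched as MCP calls. The action names come from the table above; "type" and "label" are assumed argument names:

# Survey the terrain before placing anything
curl -X POST http://localhost:3000/mcp \
  -d '{"jsonrpc":"2.0","id":1,"method":"tools/call","params":{
    "name":"ai_engineer","arguments":{"action":"survey"}}}'

# Place a wall piece ("type" and "label" are assumed argument names)
curl -X POST http://localhost:3000/mcp \
  -d '{"jsonrpc":"2.0","id":2,"method":"tools/call","params":{
    "name":"ai_engineer","arguments":{"action":"place","type":"muro","label":"muro_01"}}}'

# Audit structural integrity before the next piece
curl -X POST http://localhost:3000/mcp \
  -d '{"jsonrpc":"2.0","id":3,"method":"tools/call","params":{
    "name":"ai_engineer","arguments":{"action":"audit"}}}'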

Inter-Region Communication

Regions share context. When the Engineering region places a piece, the Perception region automatically knows about the new actor. When Navigation moves the camera, Perception updates its spatial awareness.

This eliminates the need for manual state management — the cortex handles coordination internally.


Cortex + Multi-Agent Bus (0.5.0)

Cortex regions and the agent bus compose naturally. A specialist agent subscribed to agent_message on channel delegate.material can be a dedicated material specialist that calls ai_modify { action: "mi_scalar", ... } and ai_build { action: "compile" } while the primary agent stays focused on the user's request. The watcher sub-agent observes the visual result via /preview?bare=1 and confirms or rejects by publishing on verification.confirmed or verification.rejected. See Multi-Agent Coordination for full patterns.
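
As a sketch of the hand-off, a watcher might confirm a verified result like this. The tool name agent_publish and its arguments are hypothetical (see Multi-Agent Coordination for the real bus tools); only the channel name comes from this page:

# agent_publish, "payload", and the target name are hypothetical; the channel is documented above
curl -X POST http://localhost:3000/mcp \
  -d '{"jsonrpc":"2.0","id":1,"method":"tools/call","params":{
    "name":"agent_publish","arguments":{"channel":"verification.confirmed","payload":{"target":"M_Wall_Inst"}}}}'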


Extending the Cortex

To add a new region:

  1. Define the region handler in CBNeuralCortex.cpp
  2. Wire it in DispatchRegion()
  3. Register the cortex tool in ClaudusBridgeModule.cpp

The architecture is designed to grow — new regions can be added as the plugin evolves. Existing regions are also extensible: add new actions to a region's switch statement, register them in the schema, and they appear in the next ai_brain_map discovery call.
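
A quick way to confirm a newly registered region or action is live is to rerun discovery after the rebuild (jq is optional, used here only to pretty-print):

curl -s -X POST http://localhost:3000/mcp \
  -d '{"jsonrpc":"2.0","id":1,"method":"tools/call","params":{
    "name":"ai_brain_map","arguments":{}}}' | jq .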