Neural Cortex Architecture
The Neural Cortex Architecture organizes all 492+ tools as interconnected neurons within cortex regions. Originally introduced in v0.4.0 and extended in v0.5.0 with multi-agent coordination, the cortex enables native AI communication where perception, navigation, creation, modification, inspection, building, scene management, blueprints, media, engineering, and evaluation all work together seamlessly.
How It Works
Instead of calling individual tools in isolation, the Neural Cortex organizes tools into regions that communicate with each other:
AI Client (Claude Code, Codex CLI, Cursor, Windsurf, ...)
|
v
Neural Cortex (CBNeuralCortex.cpp)
|
+-- ai_brain_map -> Discovery (lists every region + every action)
+-- ai_perceive -> Structured perception snapshot
+-- ai_navigate -> Smooth movement, never teleport
+-- ai_create -> Spawn / create anything
+-- ai_modify -> Edit transforms, properties, components
+-- ai_inspect -> Query / inspect anything
+-- ai_build -> Compile, validate, build, save
+-- ai_scene -> Levels, landscape, source control, gameplay
+-- ai_blueprint -> Blueprint graphs, BTs, BlueprintLisp
+-- ai_media -> Audio, animation, sequencer, niagara, metasound
+-- ai_engineer -> Construction workflows (survey/place/verify/audit)
+-- ai_evaluate -> Orbital inspection waypoints
|
v
40+ Manager Classes -> UE5 Editor API
Each region handles a domain of intelligence:
| Region | Purpose | Sample Actions |
|---|---|---|
| ai_brain_map | Discovery — lists every region and every action it accepts | (no params) |
| ai_perceive | See and understand the scene in one call | camera, outliner, proximity, collision, look_at |
| ai_navigate | Move through the editor smoothly | absolute, relative, look_at_actor |
| ai_create | Build and spawn anything | actor, basic_shape, material, mesh, widget, niagara_system, level_sequence... |
| ai_modify | Edit transforms, properties, components | transform, property, attach, mi_scalar, niagara_param, widget_property... |
| ai_inspect | Query and inspect anything | actors, find_actors, blueprint, graph, runtime_actors, perf, output_log... |
| ai_build | Compile, validate, build, save | compile, validate, build_all, full_build, save_all, livecoding... |
| ai_scene | Levels, landscape, source control, gameplay | create_level, sublevels, foliage, source_control, gameplay_tags, nav_path... |
| ai_blueprint | Blueprint graphs, BTs, BlueprintLisp | add_node, connect, create_function, lisp_to_bp, bt_modify_key... |
| ai_media | Audio, animation, sequencer, niagara, metasound | play_sound, anim_notifies, play_sequence, niagara_params, preview_metasound... |
| ai_engineer | Construction workflows | survey, place, verify, audit, demolish, clear |
| ai_evaluate | Orbital inspection waypoints | calculates a tour around a target for multi-angle review |
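Every region above is invoked through the same JSON-RPC tools/call envelope. A minimal client-side sketch, assuming the http://localhost:3000/mcp endpoint and envelope shape shown in the curl example further down this page (the tools_call helper name is hypothetical):

```python
import json

MCP_URL = "http://localhost:3000/mcp"  # endpoint from the curl example below

def tools_call(name: str, arguments: dict, request_id: int = 1) -> dict:
    """Build a JSON-RPC 2.0 tools/call envelope for a cortex region."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    }

# Discovery first: ai_brain_map takes no parameters and lists every
# region plus every action it accepts.
discovery = tools_call("ai_brain_map", {})
print(json.dumps(discovery))
```

POST the serialized envelope to MCP_URL with Content-Type: application/json to execute it against a running editor.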
Perception Region
ClaudusBridge 0.5.0 splits perception into visual (the WebRTC /preview stream, observed by the watcher sub-agent) and structured (engine data without pixels). The MCP server no longer generates PNG frame-view files for agent sight — /preview is the canonical realtime view, and structured perception tools return scene metadata directly as JSON.
Visual Perception — /preview stream
The agent (or its watcher sub-agent) opens http://localhost:3000/preview?bare=1 in a browser tab controlled via Claude in Chrome (or any equivalent browser-automation MCP). Pixel Streaming 2 establishes the WebRTC peer connection automatically (AutoConnect=true&AutoPlayVideo=true) so no user gesture is required.
/preview?bare=1 strips all chrome (header, padding, frame border, HUD chips) so the iframe fills 100vw × 100vh and the stream renders at native resolution.
Structured Perception (ai_perceive)
Combines everything in one call:
curl -X POST http://localhost:3000/mcp \
  -H "Content-Type: application/json" \
  -d '{"jsonrpc":"2.0","id":1,"method":"tools/call","params":{
    "name":"ai_perceive","arguments":{}}}'
Returns: camera state, look_at target, full outliner, proximity sphere, collision sphere, ground trace, visible actors, and change detection.
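A sketch of consuming that snapshot on the client side. The field names below mirror the prose list above (camera, look_at, outliner, proximity, visible actors) but are assumptions about the response shape; check ai_brain_map for the authoritative schema:

```python
# Hypothetical ai_perceive result, shaped after the fields listed above.
snapshot = {
    "camera": {"location": [0, 0, 300], "rotation": [0, -15, 0]},
    "look_at": "BP_Door_01",
    "outliner": ["Floor", "BP_Door_01", "PointLight_2"],
    "proximity": [{"actor": "BP_Door_01", "distance": 420.0}],
    "visible_actors": ["Floor", "BP_Door_01"],
}

# One perception call is enough to pick the nearest actor of interest.
nearest = min(snapshot["proximity"], key=lambda hit: hit["distance"])
print(nearest["actor"], nearest["distance"])  # BP_Door_01 420.0
```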
Engine Data Stream (ai_vision_stream)
Lightweight engine data — camera transform, visible actors, proximity objects — without pixel capture. Used for continuous tracking between visual snapshots.
Pixel-free Vision (get_vision, scan_360, proximity_alert)
Real-time viewport vision via screen-space raycasting (no screenshots, no images, microsecond-class latency):
- get_vision — Builds a depth map with actor identification per region, gap analysis, navigation suggestions, motion detection, blob detection, and texture classification.
- scan_360 — 360-degree peripheral scan: 36 rays at 10° intervals; returns distance per direction, blocked status, open directions, and escape routes.
- proximity_alert — 26 rays in a sphere; returns threat_level (safe/alert/DANGER/CRITICAL), per-actor distance, and an evasion vector with ready-to-use navigate() parameters.
Cognitive Map (get_cognitive_map)
Top-down 2D map built from accumulated get_vision() calls. Each cell stores height, surface type, actor label, exploration status. Supports ASCII-art visualization or structured grid output.
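The per-cell structure can be pictured like this. The MapCell fields follow the prose above (height, surface type, actor label, exploration status), but the dataclass and grid layout are illustrative assumptions, not the plugin's internal representation:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class MapCell:
    height: float = 0.0
    surface: str = "unknown"
    actor: Optional[str] = None
    explored: bool = False

# 3x3 toy grid; one cell has been explored via get_vision().
grid = {(x, y): MapCell() for x in range(3) for y in range(3)}
grid[(1, 1)] = MapCell(height=120.0, surface="stone", actor="Pillar_01", explored=True)

# ASCII-art rendering: '#' = explored, '.' = unexplored.
rows = ["".join("#" if grid[(x, y)].explored else "." for x in range(3))
        for y in range(3)]
print("\n".join(rows))
```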
Navigation Region
All navigation uses smooth movement, never teleportation, so the camera path stays continuous and each move can be verified with a follow-up perception call.
| Tool | Use Case |
|---|---|
| ai_navigate | Smooth camera movement with absolute, relative, or look_at_actor modes |
| move_camera | WASD-style relative movement (forward/right/up) |
| orbit_camera | Orbit around a target point or actor |
| focus_viewport_on_actor | Center camera on a named actor |
Navigation Protocol
- Perceive current position
- Calculate destination
- Move smoothly (never teleport)
- Perceive again to verify arrival
- Adjust if needed
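The protocol above can be sketched as a small verification loop. perceive() and navigate() stand in for MCP calls to ai_perceive and ai_navigate; their argument and response shapes here are illustrative assumptions, not the real tool schemas:

```python
import math

def navigate_with_verification(perceive, navigate, target,
                               tolerance=25.0, max_tries=3):
    """Perceive position, move smoothly, re-perceive to verify, adjust."""
    for _ in range(max_tries):
        pos = perceive()["camera"]["location"]
        if math.dist(pos, target) <= tolerance:
            return True  # arrived within tolerance
        navigate({"mode": "absolute", "location": target})  # smooth, never teleport
    return False

# Dry run against in-memory stubs instead of a live editor:
state = {"pos": [0.0, 0.0, 0.0]}
arrived = navigate_with_verification(
    lambda: {"camera": {"location": list(state["pos"])}},
    lambda args: state.update(pos=list(args["location"])),
    target=[1200.0, 0.0, 300.0],
)
print(arrived)  # True
```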
Creation Region
Routes creation requests to the appropriate manager based on what's being created:
- Actors -> CBActorManager
- Materials -> CBMaterialManager
- Blueprints -> CBBlueprintManager
- Widgets -> CBWidgetManager
- Meshes -> CBMeshManager
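Conceptually this is a routing table keyed by asset kind. The manager names come from the list above; the Python dict is an illustrative assumption about the dispatch, not the C++ implementation in CBNeuralCortex.cpp:

```python
# Creation region routing, sketched as a lookup table.
CREATION_ROUTES = {
    "actor": "CBActorManager",
    "material": "CBMaterialManager",
    "blueprint": "CBBlueprintManager",
    "widget": "CBWidgetManager",
    "mesh": "CBMeshManager",
}

def route_creation(kind: str) -> str:
    """Return the manager responsible for creating this asset kind."""
    try:
        return CREATION_ROUTES[kind]
    except KeyError:
        raise ValueError(f"unknown creation kind: {kind!r}")

print(route_creation("material"))  # CBMaterialManager
```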
Engineering Region
The engineering system provides construction workflow tools:
| Action | Description |
|---|---|
survey | Inspect terrain before building |
place | Place a piece with validation (door width, stair rise, bounds) |
verify | Navigate to and inspect a placed piece |
audit | Structural audit: errors, warnings, piece inventory |
demolish | Remove a specific piece by label |
clear | Remove all pieces matching a prefix |
Engineering Workflow
1. survey -> Understand the terrain
2. place -> Put down a piece (cimiento/foundation, pilar/pillar, muro/wall, ventana/window, ...)
3. verify -> Move to inspect it visually
4. audit -> Check structural integrity
5. Repeat -> Until construction is complete
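The loop above, sketched as code. The action names come from the ai_engineer table; the call() stand-in and the piece spec fields are illustrative assumptions:

```python
def build_structure(call, pieces):
    """Survey once, then place/verify each piece, then audit the result."""
    call({"action": "survey"})                               # 1. understand terrain
    for piece in pieces:
        call({"action": "place", **piece})                   # 2. place with validation
        call({"action": "verify", "label": piece["label"]})  # 3. inspect visually
    return call({"action": "audit"})                         # 4. structural audit

# Dry run with a logging stub instead of a live ai_engineer call:
log = []
def call(args):
    log.append(args["action"])
    return {"errors": 0} if args["action"] == "audit" else {}

report = build_structure(call, [{"label": "muro_01", "type": "muro"}])
print(log, report["errors"])
```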
Inter-Region Communication
Regions share context. When the Engineering region places a piece, the Perception region automatically knows about the new actor. When Navigation moves the camera, Perception updates its spatial awareness.
This eliminates the need for manual state management — the cortex handles coordination internally.
Cortex + Multi-Agent Bus (0.5.0)
Cortex regions and the agent bus compose naturally. A specialist agent subscribed to agent_message on channel delegate.material can be a dedicated material specialist that calls ai_modify { action: "mi_scalar", ... } and ai_build { action: "compile" } while the primary agent stays focused on the user's request. The watcher sub-agent observes the visual result via /preview?bare=1 and confirms or rejects by publishing on verification.confirmed or verification.rejected. See Multi-Agent Coordination for full patterns.
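A watcher verdict on the bus might be shaped like this. The channel names come from the text above; the message envelope fields are assumptions, so consult the Multi-Agent Coordination page for the real schema:

```python
def verdict_message(confirmed: bool, task_id: str, note: str) -> dict:
    """Build a hypothetical watcher verdict for the agent bus."""
    channel = "verification.confirmed" if confirmed else "verification.rejected"
    return {"channel": channel, "payload": {"task_id": task_id, "note": note}}

msg = verdict_message(False, "mat-042", "specular too high on M_Rust")
print(msg["channel"])  # verification.rejected
```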
Extending the Cortex
To add a new region:
- Define the region handler in CBNeuralCortex.cpp
- Wire it in DispatchRegion()
- Register the cortex tool in ClaudusBridgeModule.cpp
The architecture is designed to grow — new regions can be added as the plugin evolves. Existing regions are also extensible: add new actions to a region's switch statement, register them in the schema, and they appear in the next ai_brain_map discovery call.