# Wiring AI
AI generation in the modality library is built on four traits. Each trait has a focused responsibility, and together they let an AI agent fill out a modality’s content through tool calls, self-correct against validation rules, and know when work remains.
This guide walks through making a whiteboard component fully AI-generatable.
## Architecture Overview

```mermaid
flowchart TD
    O[Orchestrator] -->|"spawns"| SA[Subagent]
    SA -->|"tool calls"| B[bridge.rs]
    B -->|"CommandTool → rig::Tool"| S[Session::dispatch]
    S -->|"reduce → recompile → emit"| ST[State]
    SA -->|"calls done"| V[Validate::validate]
    V -->|"pass"| DONE[Complete]
    V -->|"fail → errors as tool result"| SA
```

The orchestrator is deterministic code on the session — it controls execution order (metadata, then node, then children in parallel). Subagents are rig agents with task-specific prompts. The orchestrator spawns subagents; subagents never spawn each other.
## The Four AI Traits

| Trait | Purpose | Key Method |
|---|---|---|
| `Describe` | Self-description as text | `describe() -> String` |
| `Agent` | Agent configuration | `preamble() -> String`, `tools() -> Vec<ToolDyn>` |
| `Validate` | Validation rules | `rules() -> Vec<ValidationRule>`, `validate(cel_eval)` |
| `Pending` | Completeness check | `is_pending() -> bool` |
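In trait form, the table above can be sketched roughly like this. This is a simplified sketch for orientation only: the exact signatures in `modality_core` may differ, and `Agent`'s tool plumbing is elided. The `Badge` type is a toy example.

```rust
// Simplified sketch of the four AI traits, with a toy component.
// Signatures are illustrative, not the exact modality_core API.

pub trait Describe {
    fn describe(&self) -> String;
}

pub enum ValidationRule {
    Expression(String), // CEL, hard gate
    Judgement(String),  // natural-language guidance only
}

pub trait Validate {
    fn rules(&self) -> Vec<ValidationRule>;
}

pub trait Pending {
    fn is_pending(&self) -> bool;
}

// Agent extends Describe; tools() is elided in this sketch.
pub trait Agent: Describe {
    fn preamble(&self) -> String;
}

pub struct Badge {
    pub label: Option<String>,
}

impl Describe for Badge {
    fn describe(&self) -> String {
        "Badge: a colored label".to_string()
    }
}

impl Pending for Badge {
    fn is_pending(&self) -> bool {
        // Pending while the required property is still unset
        self.label.is_none()
    }
}
```

Note how the traits compose: a type that is `Pending` but has no `Describe` impl cannot participate, because the orchestrator needs both the "is work left?" signal and the self-description to build subagent context.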
## Step 1: Implement Describe

`Describe` is the universal self-description trait. Every type that participates in AI describes itself as text. The orchestrator concatenates ancestor descriptions to build context for subagents.

```rust
use modality_core::ai::Describe;

impl Describe for BadgeComponent {
    fn describe(&self) -> String {
        "Badge: renders a colored label with configurable text and background color. \
         Used for status indicators, tags, and short annotations."
            .to_string()
    }
}
```

Descriptions should be self-contained — a component describes only itself, never its parents. The orchestrator handles context assembly:
```rust
// The orchestrator builds context like this:
let context = format!(
    "{}\n{}",
    ancestor_context, // "Whiteboard: canvas with positioned components"
    self.describe(),  // "Badge: renders a colored label..."
);
```

### What needs Describe

Every type the AI system touches implements `Describe`:
```rust
// Property types describe their constraints
impl Describe for PropertyType {
    fn describe(&self) -> String {
        match self {
            PropertyType::Text { max_length: Some(n) } =>
                format!("text (max {} chars)", n),
            PropertyType::Select { options } => format!(
                "one of {}",
                options.iter()
                    .map(|o| o.label.as_str())
                    .collect::<Vec<_>>()
                    .join(", ")
            ),
            PropertyType::Number { min, max } => format!(
                "number ({}-{})",
                min.map(|n| n.to_string()).unwrap_or("..".into()),
                max.map(|n| n.to_string()).unwrap_or("..".into())
            ),
            // ...
        }
    }
}
```

```rust
// Properties describe their current state
impl Describe for Property {
    fn describe(&self) -> String {
        format!(
            "{}: {} ({})",
            self.schema.key,
            self.value.describe(),
            self.schema.property_type.describe(),
        )
    }
}
```

```rust
// Positions describe spatial arrangement
impl Describe for WhiteboardPosition {
    fn describe(&self) -> String {
        format!("x:{} y:{} w:{} h:{}", self.x, self.y, self.width, self.height)
    }
}
```

```rust
// Placements describe what is where
impl Describe for ComponentPlacement<WhiteboardPosition> {
    fn describe(&self) -> String {
        format!("{} at ({})", self.component_key, self.position.describe())
    }
}
```

## Step 2: Implement Validate
`Validate` defines rules that the AI agent must satisfy. There are three kinds of rules:

```rust
pub enum ValidationRule {
    /// CEL expression evaluated to bool. Hard gate.
    Expression(String),
    /// Natural language guidance. Included in preamble only, never checked.
    Judgement(String),
    /// Rust function. Hard gate. Not AI-generatable.
    Function(fn(&PropertyValue) -> bool),
}
```

Implement `Validate` for your component:
```rust
use modality_core::ai::{Validate, ValidationRule, ValidationError};

impl Validate for BadgeComponent {
    fn rules(&self) -> Vec<ValidationRule> {
        vec![
            // CEL expression -- hard gate, evaluated at done time
            ValidationRule::Expression(
                "size(props.label) > 0 && size(props.label) <= 50".to_string()
            ),

            // Judgement -- natural language, preamble only
            ValidationRule::Judgement(
                "Label text should be concise and descriptive, \
                 suitable for a status indicator or tag.".to_string()
            ),

            // Function -- hard gate, Rust-native check
            ValidationRule::Function(|val| {
                if let PropertyValue::Text(t) = val {
                    !t.contains('\n') // no newlines in badges
                } else {
                    true
                }
            }),
        ]
    }

    // validate() has a default implementation that evaluates
    // Expression and Function rules. Judgement rules are skipped.
}
```

### How validation works in the agent loop
1. The agent uses tools to fill property values.
2. The agent calls the `done` tool to signal completion. `Validate::validate()` runs all `Expression` and `Function` rules.
3. If all pass, the agent stops successfully.
4. If any fail, the errors are returned as the `done` tool's result. The agent reads the errors and fixes its work.
5. The agent calls `done` again. This loop continues until validation passes.

The agent can also call `validate` proactively to check its work before signaling `done`.
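Stripped of the LLM plumbing, the loop above is plain retry-until-valid control flow. A minimal sketch, with hypothetical names (`run_until_valid` is illustrative; the real loop lives in the subagent runner):

```rust
// Sketch of the done-gate loop: keep giving the agent its validation
// errors until every hard rule passes or the turn budget runs out.
// Returns Ok(turns_used) on success, Err(remaining_errors) on budget exhaustion.
fn run_until_valid<F, V>(
    mut attempt_fix: F,
    validate: V,
    max_turns: usize,
) -> Result<usize, Vec<String>>
where
    F: FnMut(&[String]),    // one agent turn: reads errors, mutates state
    V: Fn() -> Vec<String>, // Validate::validate(): empty vec means pass
{
    let mut errors = validate();
    for turn in 0..max_turns {
        if errors.is_empty() {
            return Ok(turn); // done passed
        }
        attempt_fix(&errors); // errors come back as the done tool's result
        errors = validate();
    }
    if errors.is_empty() { Ok(max_turns) } else { Err(errors) }
}
```

In a test harness, the "agent" can be a closure that fixes state when it sees errors, with shared state behind a `RefCell` so both closures can touch it.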
### CEL environment

CEL expressions have access to three variables:

- `metadata` -- this component's metadata (e.g., `Rect` for whiteboard components)
- `props` -- all property values, including unset ones as `null`
- `value` -- the current property's value (useful in per-property expressions)

```
// Example CEL expressions:
"size(props.label) <= 50"                           // string length check
"props.font_size >= 8.0 && props.font_size <= 72.0" // range check
"metadata.width >= 100.0"                           // metadata constraint
"has(props.color)"                                  // required field check
```

## Step 3: Implement Pending
`Pending` tells the orchestrator whether this component still needs AI generation:

```rust
use modality_core::ai::Pending;

impl Pending for BadgeComponent {
    fn is_pending(&self) -> bool {
        // A badge is pending if any required property has no value
        self.properties().iter()
            .filter(|p| p.required)
            .any(|p| p.default_value.is_none())
    }
}
```

In practice, `is_pending()` checks whether required properties lack values. Optional properties with defaults do not trigger generation, but may get filled during the same agent call.
The orchestrator uses `is_pending()` to decide whether to spawn a subagent:

```rust
// Inside the orchestrator (deterministic code, not an agent):
if metadata.is_pending() {
    run_metadata_subagent(&context).await;
}
if node.is_pending() {
    run_node_subagent(&context).await;
}
for child in children {
    if child.is_pending() {
        // children run in parallel
        run_child_subagent(&child_context).await;
    }
}
```

## Step 4: Implement Agent
`Agent` extends `Describe` and configures the rig agent for this context. It provides a system prompt (preamble) and the tool set:

```rust
use modality_core::ai::{Agent, Describe};
use rig::tool::ToolDyn;

/// Agent context for whiteboard-level generation.
pub struct WhiteboardAgentContext {
    pub properties: Vec<Property>,
    pub placements: Vec<ComponentPlacement<WhiteboardPosition>>,
    pub metadata: WhiteboardMetadata,
}

impl Describe for WhiteboardAgentContext {
    fn describe(&self) -> String {
        let mut desc = format!(
            "Whiteboard: canvas {}x{} with {} components.\n",
            self.metadata.width,
            self.metadata.height,
            self.placements.len(),
        );
        desc.push_str("Components:\n");
        for p in &self.placements {
            desc.push_str(&format!("  - {}\n", p.describe()));
        }
        desc
    }
}

impl Agent for WhiteboardAgentContext {
    fn preamble(&self) -> String {
        let mut preamble = format!(
            "You are a whiteboard content generator.\n\n\
             ## Context\n{}\n\n\
             ## Properties\n",
            self.describe(),
        );

        // Include property descriptions
        for prop in &self.properties {
            preamble.push_str(&format!("- {}\n", prop.describe()));
        }

        // Include validation rules via Describe
        preamble.push_str("\n## Rules\n");
        for rule in self.rules() {
            preamble.push_str(&format!("- {}\n", rule.describe()));
        }

        preamble
    }

    fn tools(&self) -> Vec<Box<dyn ToolDyn>> {
        // Tools are generated from the intent enum via CommandTool.
        // The bridge adapter converts these to rig tools.
        vec![] // bridge.rs supplies the tools from M::Intent: CommandTool
    }
}
```

## How bridge.rs Connects Everything
`bridge.rs` is a thin adapter that converts `CommandTool` intent variants into rig `Tool` implementations. When the AI agent calls a tool:

```mermaid
sequenceDiagram
    participant Agent as Rig Agent
    participant Bridge as bridge.rs
    participant Session as Session::dispatch
    participant State as Modality State

    Agent->>Bridge: tool_call("rotate_element", {id, angle})
    Bridge->>Bridge: M::Intent::from_tool_call("rotate_element", args)
    Bridge->>Session: dispatch(Command::Intent(SessionIntent::Modality(intent)))
    Session->>State: M::reduce(cmd, state, fx)
    State-->>Session: state mutated, fx.services().agent.tool_output(...)
    Session->>Session: drain_agent_events()
    Session-->>Bridge: Vec<AgentEvent>
    Bridge-->>Agent: tool result (last ToolOutput or ToolError)
```

The key lines are in `bridge.rs`:

```rust
// bridge.rs -- SessionTool::call():
let intent = M::Intent::from_tool_call(&tool_name, args)?;
session.dispatch(Command::Intent(SessionIntent::Modality(intent)));
let events = session.drain_agent_events();
// Last ToolOutput/ToolError from events is the result
```

### AgentService
The `AgentService` is the write-only channel that reduce functions use to communicate results to the AI agent:

```rust
pub struct AgentService {
    tx: mpsc::Sender<AgentEvent>,
}

pub enum AgentEvent {
    ToolOutput(serde_json::Value),
    ToolError(String),
}

impl AgentService {
    pub fn tool_output(&self, val: impl Serialize);
    pub fn tool_error(&self, msg: impl Into<String>);
    pub fn noop() -> Arc<Self>; // for tests -- silently discards
}
```

The session creates the `AgentService`, passes the `Arc` to the modality's services, and owns the receiver. After dispatch, `session.drain_agent_events()` collects all events produced during that reduce cycle.
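The write-then-drain pattern can be sketched with a std `mpsc` channel. This is a simplified synchronous sketch, not the real implementation (the real `AgentService` uses an async sender and JSON payloads; the `String`-payload types and `drain_agent_events` helper here are illustrative):

```rust
use std::sync::mpsc;

// Sketch of the AgentService pattern: reduce code writes events,
// the session drains everything produced during one dispatch.
#[derive(Debug, PartialEq)]
enum AgentEvent {
    ToolOutput(String),
    ToolError(String),
}

struct AgentService {
    tx: mpsc::Sender<AgentEvent>,
}

impl AgentService {
    fn tool_output(&self, val: impl Into<String>) {
        let _ = self.tx.send(AgentEvent::ToolOutput(val.into()));
    }
    fn tool_error(&self, msg: impl Into<String>) {
        let _ = self.tx.send(AgentEvent::ToolError(msg.into()));
    }
}

// Non-blocking drain: collect whatever is queued right now.
fn drain_agent_events(rx: &mpsc::Receiver<AgentEvent>) -> Vec<AgentEvent> {
    rx.try_iter().collect()
}
```

Because the drain is non-blocking, a second drain immediately after returns nothing: each call scoops up only the events emitted since the last one, which is exactly the per-dispatch semantics the session needs.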
### Reporting from reduce

Always report back from your reduce arms:

```rust
WhiteboardIntent::RotateElement { placement_id, angle } => {
    if let Some(p) = state.synced.placements.iter_mut()
        .find(|p| p.id == placement_id)
    {
        p.position.rotation = angle;
        fx.services().agent.tool_output(serde_json::json!({
            "status": "rotated",
            "placement_id": placement_id,
            "new_angle": angle,
        }));
    } else {
        fx.services().agent.tool_error(
            format!("Element '{}' not found", placement_id)
        );
    }
}
```

## The Orchestrator
The orchestrator is deterministic code on the session. It runs generation top-down:

1. **Metadata subagent** — fills modality-level metadata if pending.
2. **Node subagent** — fills the modality’s own properties and placements if pending.
3. **Children** — recurses into each child, in parallel.
```rust
impl<I: ReduceIntent<Whiteboard>> Session<Whiteboard, I> {
    async fn orchestrate(&self, ancestor_context: String) {
        let context = format!("{}\n{}", ancestor_context, self.describe());

        // 1. Metadata subagent if pending
        // (`model` is the session's configured LLM handle, defined elsewhere)
        if self.metadata_agent_context().is_pending() {
            run_subagent(
                model.clone(),
                &self.metadata_agent_context().preamble(),
                self.metadata_agent_context().tools(),
                "Fill the whiteboard metadata.",
                10, // max_turns
            ).await;
        }

        // 2. Node subagent if pending
        if self.node_agent_context().is_pending() {
            run_subagent(
                model.clone(),
                &self.node_agent_context().preamble(),
                self.node_agent_context().tools(),
                "Fill the whiteboard content.",
                20,
            ).await;
        }

        // 3. Children in parallel (each recurses)
        let futures: Vec<_> = self.children()
            .iter()
            .filter(|(_, slot)| slot.is_hot())
            .map(|(id, slot)| {
                let child_context = context.clone();
                async move {
                    slot.as_hot().unwrap().orchestrate(child_context).await;
                }
            })
            .collect();
        futures::future::join_all(futures).await;
    }
}
```

### Context flows down, never up
Each level adds its own `describe()` output to the context string. Children receive lineage context (“I’m a whiteboard inside slide 3 of a photosynthesis lesson”) but never receive parent conversation history. Subagents get a curated task prompt, not the parent’s chat log.
Property bindings (PropertyBinding::Template) resolve parent values into child properties before the child runs. The child sees bound values as its own properties — no need to pass parent data in the context.
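The resolution step can be pictured as simple template substitution. This is a hypothetical sketch: `resolve_template` and the string-keyed map are illustrative stand-ins for the real `PropertyBinding::Template` machinery, which operates on typed property values.

```rust
use std::collections::HashMap;

// Hypothetical sketch of template binding resolution: substitute
// {{key}} placeholders with parent property values before the
// child subagent runs, so the child sees plain bound values.
fn resolve_template(template: &str, parent: &HashMap<String, String>) -> String {
    let mut out = template.to_string();
    for (key, value) in parent {
        // "{{{{{}}}}}" renders as the literal placeholder "{{key}}"
        out = out.replace(&format!("{{{{{}}}}}", key), value);
    }
    out
}
```

Placeholders with no matching parent key are left untouched, which keeps the sketch's behavior predictable: the child sees either a resolved value or the original placeholder, never a silent empty string.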
## Subagent Loop

The subagent runs multi-turn until `done` passes validation:

```mermaid
flowchart TD
    START[Subagent starts] --> TOOLS[Use tools to fill values]
    TOOLS --> CHECK{Call done or validate}
    CHECK -->|validate| VAL[Run Validate::validate]
    VAL -->|pass| INFO[Return OK to agent]
    INFO --> TOOLS
    VAL -->|fail| ERR[Return errors to agent]
    ERR --> TOOLS
    CHECK -->|done| GATE[Run Validate::validate]
    GATE -->|pass| SUCCESS[Agent stops]
    GATE -->|fail| ERRGATE[Return errors to agent]
    ERRGATE --> TOOLS
```

The agent can call `validate` at any time to check its progress. `done` is the final gate that must pass all rules.
## Trait Coverage Reference

| Type | Describe | Agent | Validate | Pending |
|---|---|---|---|---|
| PropertyType | yes | | | |
| Property | yes | yes | yes | |
| PropertySchema | yes | | | |
| PropertyValue | yes | | | |
| ValidationRule | yes | | | |
| ComponentPlacement | yes | | | |
| Position types | yes | | | |
| Metadata (per-modality) | yes | yes | yes | yes |
| Component | yes | yes | yes | yes |
| Modality | yes | yes | yes | yes |
## Next Steps

- **Parent-Child Modalities** — wire AI generation across parent/child boundaries
- **Adding an Intent** — the intents that AI tools call
- **Session Reference** — full session lifecycle details