Adding an Intent
Intents are the mutations your modality accepts. Every user action, every AI tool call, and every programmatic state change flows through an intent dispatched via Session::dispatch(). This guide walks through adding a RotateElement intent to the whiteboard.
The Intent Lifecycle
When you dispatch an intent, it flows through this path:
```mermaid
flowchart LR
    A[Dart / AI] -->|"dispatch(Command::Intent(i))"| B[Session]
    B --> C[ReduceIntent::reduce]
    C --> D[M::reduce]
    D --> E[state mutation + fx]
    E --> F[lifecycle: recompile, commit, emit]
```

The modality's static reduce() function receives the intent and mutates state through &mut State. Side effects go through Effects.
Step 1: Add the Variant
Add your new variant to the modality's intent enum:
```rust
#[derive(CommandTool)]
pub enum WhiteboardIntent {
    // ... existing variants ...

    /// Rotate an element by the given angle in degrees.
    RotateElement {
        /// The ID of the placement to rotate.
        placement_id: String,
        /// Rotation angle in degrees (0-360).
        angle: f64,
    },
}
```

#[derive(CommandTool)] generates the AI tool definition from the enum. Each variant becomes a tool the AI agent can call. The doc comments on the variant and its fields become the tool description and parameter descriptions — write them clearly.
Hiding from AI
If an intent should be dispatchable from Dart but not available to AI agents, mark it with #[tool(hidden)]:
```rust
#[derive(CommandTool)]
pub enum WhiteboardIntent {
    // ... other variants ...

    /// Internal viewport pan -- not useful for AI.
    #[tool(hidden)]
    PanViewport { dx: f64, dy: f64 },
}
```

Hidden variants remain fully functional for dispatch. They are only excluded from the tool list that bridge.rs exposes to rig agents.
Step 2: Add the Reduce Arm
Handle the new variant in the modality's reduce() function:
```rust
fn reduce(
    cmd: Command<WhiteboardIntent, WhiteboardFeedback>,
    state: &mut WhiteboardState,
    fx: &mut Effects<ModalityServices, Command<WhiteboardIntent, WhiteboardFeedback>>,
) {
    match cmd {
        Command::Intent(intent) => match intent {
            // ... existing arms ...

            WhiteboardIntent::RotateElement { placement_id, angle } => {
                if let Some(placement) = state.synced.placements.iter_mut()
                    .find(|p| p.id == placement_id)
                {
                    placement.position.rotation = angle;

                    // Report success to the AI bridge
                    fx.services().agent.tool_output(serde_json::json!({
                        "status": "ok",
                        "placement_id": placement_id,
                        "rotation": angle,
                    }));
                } else {
                    fx.services().agent.tool_error(
                        format!("No element with id '{}'", placement_id)
                    );
                }
            }
        },
        Command::Feedback(fb) => { /* handle async feedback */ }
    }
}
```

Key patterns in reduce
Always report back to the AI bridge. When an intent modifies state, call fx.services().agent.tool_output(json!({...})) to report what happened. When it fails, call fx.services().agent.tool_error(msg). The AI agent reads these to understand the result of its tool call. If you forget this, the agent receives no feedback and may retry or hallucinate.
Mutate synced state for persistent changes. Changes to state.synced trigger a Loro commit after the reduce cycle. Changes to state.ephemeral do not persist.
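As a minimal sketch of this rule, with hypothetical stand-in types (the real WhiteboardState and Loro commit logic live in the framework), the lifecycle only needs to commit when the synced projection actually changed:

```rust
// Stand-in types for illustration only -- not the framework's real state.
#[derive(Clone, PartialEq, Default, Debug)]
struct Synced { rotation: f64 }

#[derive(Default)]
struct Ephemeral { loading: bool }

#[derive(Default)]
struct State { synced: Synced, ephemeral: Ephemeral }

// A commit is warranted only if synced state changed during reduce.
fn should_commit(state: &State, before: &Synced) -> bool {
    state.synced != *before
}

fn main() {
    let mut state = State::default();

    // Ephemeral-only mutation: no commit, nothing persists.
    let before = state.synced.clone();
    state.ephemeral.loading = true;
    assert!(!should_commit(&state, &before));

    // Synced mutation: triggers a commit after the reduce cycle.
    let before = state.synced.clone();
    state.synced.rotation = 45.0;
    assert!(should_commit(&state, &before));
}
```

This is only a model of the contract: reduce mutates freely, and persistence is decided afterwards from which half of the state changed.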
Using Effects
Effects provides three operations inside reduce(). Here is how to use each one.
fx.services() — access injected services
Services are the I/O projection. They give you access to the AgentService and any domain-specific services your modality defines.
```rust
WhiteboardIntent::RotateElement { placement_id, angle } => {
    // ... mutate state ...

    // Report result to the AI agent
    fx.services().agent.tool_output(serde_json::json!({
        "status": "rotated",
        "placement_id": placement_id,
        "angle": angle,
    }));
}
```

fx.send() — synchronous follow-up commands
send() queues a command for immediate processing in the same dispatch() call. It is reduced after the current intent finishes, before lifecycle runs. Use it for state machine transitions:
```rust
WhiteboardIntent::DuplicateElement { placement_id } => {
    if let Some(placement) = state.synced.placements.iter()
        .find(|p| p.id == placement_id)
    {
        let new_placement = ComponentPlacement {
            id: generate_id(),
            position: WhiteboardPosition {
                x: placement.position.x + 20.0,
                y: placement.position.y + 20.0,
                ..placement.position.clone()
            },
            ..placement.clone()
        };
        let new_id = new_placement.id.clone();
        state.synced.placements.push(new_placement);

        // Select the duplicate in the same dispatch cycle
        fx.send(Command::Intent(
            WhiteboardIntent::SelectElement { placement_id: new_id }
        ));
    }
}
```

The send loop processes all queued commands sequentially, without recursion. Sends can produce more sends — the loop continues until the channel is empty.
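The drain behavior can be modeled in a few lines. This is a hypothetical miniature (the real loop lives inside Session::dispatch, and Cmd here is a stand-in enum), showing that a command reduced from the queue may enqueue further commands, which the same loop then picks up:

```rust
use std::collections::VecDeque;

// Stand-in command type for illustration only.
#[derive(Debug)]
enum Cmd {
    Duplicate,
    Select(u32),
}

// Reducing one command may push follow-ups onto the queue,
// mirroring fx.send() inside the real reduce().
fn reduce(cmd: Cmd, queue: &mut VecDeque<Cmd>, log: &mut Vec<String>) {
    match cmd {
        Cmd::Duplicate => {
            log.push("duplicated".into());
            // Follow-up processed in this same dispatch cycle.
            queue.push_back(Cmd::Select(2));
        }
        Cmd::Select(id) => log.push(format!("selected {id}")),
    }
}

// Iterative drain: no recursion, loop until the queue is empty.
fn dispatch(initial: Cmd) -> Vec<String> {
    let mut queue = VecDeque::from([initial]);
    let mut log = Vec::new();
    while let Some(cmd) = queue.pop_front() {
        reduce(cmd, &mut queue, &mut log);
    }
    log
}

fn main() {
    let log = dispatch(Cmd::Duplicate);
    assert_eq!(log, vec!["duplicated".to_string(), "selected 2".to_string()]);
}
```

Because the loop is iterative, a long chain of sends cannot overflow the stack the way recursive dispatch would.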
fx.spawn() — async work with feedback
spawn() runs an async closure on the thread pool. The closure receives Arc<Services> and a CommandSender for sending feedback back to the next dispatch() call.
First, define a feedback variant:
```rust
pub enum WhiteboardFeedback {
    // ... existing variants ...

    ImageLoaded {
        placement_id: String,
        image_data: Vec<u8>,
    },
    ImageLoadFailed {
        placement_id: String,
        error: String,
    },
}
```

Then spawn the async work:
```rust
WhiteboardIntent::LoadImage { placement_id, url } => {
    // Mark as loading in ephemeral state
    state.ephemeral.loading_images.insert(placement_id.clone());

    fx.spawn(move |_svc, sender| {
        // This runs on a thread pool thread
        match fetch_image(&url) {
            Ok(data) => {
                sender.send(Command::Feedback(
                    WhiteboardFeedback::ImageLoaded {
                        placement_id,
                        image_data: data,
                    }
                ));
            }
            Err(e) => {
                sender.send(Command::Feedback(
                    WhiteboardFeedback::ImageLoadFailed {
                        placement_id,
                        error: e.to_string(),
                    }
                ));
            }
        }
    });
}
```

Handle the feedback in the Feedback arm:
```rust
Command::Feedback(fb) => match fb {
    WhiteboardFeedback::ImageLoaded { placement_id, image_data } => {
        state.ephemeral.loading_images.remove(&placement_id);
        if let Some(p) = state.synced.placements.iter_mut()
            .find(|p| p.id == placement_id)
        {
            p.bindings.insert(
                "image_data".to_string(),
                PropertyValue::Bytes(image_data),
            );
        }
    }
    WhiteboardFeedback::ImageLoadFailed { placement_id, error } => {
        state.ephemeral.loading_images.remove(&placement_id);
        state.ephemeral.last_error = Some(error);
    }
}
```

Feedback arrives on the next dispatch() call via the receiver channel. It is drained at the top of dispatch() before the new command is processed.
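The "drain feedback before the new command" order can be sketched with a plain std::sync::mpsc channel. This is a simplified model with stand-in types, not the framework's real dispatch signature:

```rust
use std::sync::mpsc;

// Stand-in command type for illustration only.
enum Cmd {
    Intent(&'static str),
    Feedback(&'static str),
}

// Model of the dispatch entry point: feedback queued by spawned tasks
// is drained first, then the incoming command is reduced.
fn dispatch(rx: &mpsc::Receiver<Cmd>, incoming: Cmd, log: &mut Vec<String>) {
    // try_recv never blocks; it drains whatever is already queued.
    while let Ok(cmd) = rx.try_recv() {
        if let Cmd::Feedback(f) = cmd {
            log.push(format!("feedback: {f}"));
        }
    }
    if let Cmd::Intent(i) = incoming {
        log.push(format!("intent: {i}"));
    }
}

fn main() {
    let (tx, rx) = mpsc::channel();
    let mut log = Vec::new();

    // A spawned task reported a result between dispatch calls.
    tx.send(Cmd::Feedback("ImageLoaded")).unwrap();

    dispatch(&rx, Cmd::Intent("RotateElement"), &mut log);
    assert_eq!(
        log,
        vec!["feedback: ImageLoaded".to_string(), "intent: RotateElement".to_string()]
    );
}
```

The ordering guarantee matters: by the time a new intent runs, the state already reflects every completed async result, so reduce arms never race against stale feedback.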
The Two Paths
```mermaid
flowchart TD
    R[reduce] -->|"fx.send()"| S[Send Channel]
    R -->|"fx.spawn()"| SP[Thread Pool]
    S -->|"same dispatch()"| R
    SP -->|"cmd_rx, next dispatch()"| D[drain at top of dispatch]
    D --> R

    style S fill:#e0f0ff
    style SP fill:#fff0e0
```

| Method | When Processed | Use Case |
|---|---|---|
| fx.send() | Same dispatch() call, after current reduce | Follow-up state changes, state machine transitions |
| fx.spawn() | Next dispatch() call, via feedback channel | I/O, network requests, expensive computation |
Testing Intents
Test reduce logic in isolation since reduce() is a static function:
```rust
#[test]
fn test_rotate_element() {
    let mut state = WhiteboardState {
        synced: WhiteboardSynced {
            placements: vec![ComponentPlacement {
                id: "elem-1".to_string(),
                component_key: "shape".to_string(),
                component_id: None,
                position: WhiteboardPosition {
                    x: 0.0,
                    y: 0.0,
                    width: 100.0,
                    height: 100.0,
                    rotation: 0.0,
                    z_index: 0,
                },
                bindings: HashMap::new(),
            }],
            ..Default::default()
        },
        ephemeral: Default::default(),
    };

    let agent = AgentService::noop();
    let services = Arc::new(ModalityServices::new(agent));
    let (send_tx, _send_rx) = mpsc::channel();
    let (spawn_tx, _spawn_rx) = mpsc::channel();
    let mut fx = Effects::new(services, send_tx, spawn_tx);

    Whiteboard::reduce(
        Command::Intent(WhiteboardIntent::RotateElement {
            placement_id: "elem-1".to_string(),
            angle: 45.0,
        }),
        &mut state,
        &mut fx,
    );

    assert_eq!(state.synced.placements[0].position.rotation, 45.0);
}
```

AgentService::noop() creates a dummy agent service for tests — tool outputs are silently discarded.
Next Steps
- Wiring AI — make your intents work as AI tools with validation
- Reducer Reference — full trait specification and dispatch lifecycle
- Effects Reference — detailed effects API