Subagents
Subagents are rig agents (rig-core v0.29.0) that handle the agentic work at each level of the modality tree. They are spawned by the orchestrator and run independently with task-specific prompts.
Design principles
These principles follow industry consensus (Claude Code, OpenCode, LangGraph):
- Task-specific prompt — a child gets a curated task prompt, not parent conversation history
- Parent curates — the parent builds context via the `Describe` chain and includes relevant information in the task prompt
- Child returns summary — the child produces a summary result for the parent, not raw state
- Flat over deep — one orchestrator per modality level, recursive. Subagents are single-level; nesting is handled by the orchestrator, not by agents calling agents
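To make "parent curates" concrete, here is a hypothetical std-only sketch of a parent assembling a task-specific prompt from its own context rather than forwarding conversation history. The names `build_task_prompt`, `node_description`, and `sibling_summaries` are illustrative, not the real Describe-chain API:

```rust
// Hypothetical sketch: the parent curates a task prompt for a child
// subagent instead of handing over its conversation history.
fn build_task_prompt(node_description: &str, sibling_summaries: &[String]) -> String {
    let mut prompt = format!("Fill in the values for this node:\n{node_description}\n");
    if !sibling_summaries.is_empty() {
        // Summaries are what children returned earlier; raw state is never forwarded.
        prompt.push_str("\nSummaries from already-completed sibling nodes:\n");
        for summary in sibling_summaries {
            prompt.push_str("- ");
            prompt.push_str(summary);
            prompt.push('\n');
        }
    }
    prompt
}

fn main() {
    let prompt = build_task_prompt(
        "A slide titled 'Photosynthesis' with body text and one image",
        &["Intro slide: defines the topic for a 7th-grade audience".to_string()],
    );
    println!("{prompt}");
}
```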
run_subagent()
```rust
pub async fn run_subagent<M: CompletionModel + 'static>(
    model: M,
    preamble: &str,
    mut tools: Vec<Box<dyn ToolDyn>>,
    validation_ctx: Arc<ValidationContext>,
    task: &str,
    max_turns: usize,
) -> Result<String, SubagentError> { ... }
```

The function injects `done` and `validate` tools automatically, builds a rig agent, and runs multi-turn streaming until the agent signals completion or exhausts `max_turns`.
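The turn loop can be sketched with std types only. This is an assumption-laden simplification: the real function streams rig completions, while here `turn` stands in for one model round-trip plus tool execution:

```rust
use std::sync::Arc;
use std::sync::atomic::{AtomicBool, Ordering};

// Hypothetical sketch of the loop inside run_subagent: run turns until
// the done signal is set (done tool passed validation) or max_turns
// is exhausted.
fn run_turns(
    done_signal: &Arc<AtomicBool>,
    max_turns: usize,
    mut turn: impl FnMut(usize),
) -> Result<usize, String> {
    for i in 0..max_turns {
        turn(i); // one agent turn: model call plus tool calls
        if done_signal.load(Ordering::Relaxed) {
            return Ok(i + 1); // agent called `done` and validation passed
        }
    }
    Err(format!("agent did not finish within {max_turns} turns"))
}

fn main() {
    let done = Arc::new(AtomicBool::new(false));
    let d = done.clone();
    // Simulated agent that "finishes" on its third turn
    let turns = run_turns(&done, 10, move |i| {
        if i == 2 {
            d.store(true, Ordering::Relaxed);
        }
    });
    println!("{turns:?}"); // Ok(3)
}
```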
Validation loop
Every subagent has a built-in validation loop controlled by two injected tools, `done` and `validate`.
done
The agent calls `done` when it believes it has finished. This triggers Validate::validate() via CEL evaluation on all properties:
- Pass (no errors) — sets a done signal, agent stops
- Fail (errors returned) — errors are sent back as the tool result in the same conversation; the agent fixes its work and calls `done` again
```rust
// DoneTool::call():
let errors = (self.ctx.validate_fn)();
if errors.is_empty() {
    self.ctx.done_signal.store(true, Ordering::Relaxed);
    Ok(json!({"ok": true, "message": "Validation passed. Done."}))
} else {
    Ok(json!({
        "ok": false,
        "errors": errors,
        "message": "Validation failed. Fix the errors and call done again."
    }))
}
```

validate
The agent can call validate at any time to check its work proactively before signaling done. Same validation logic, just agent-initiated rather than gate-initiated. This avoids wasting a done attempt on known failures.
```rust
// ValidateTool::call():
let errors = (self.ctx.validate_fn)();
if errors.is_empty() {
    Ok(json!({"ok": true, "message": "All validation rules pass."}))
} else {
    Ok(json!({"ok": false, "errors": errors, "message": "Validation errors found."}))
}
```

```mermaid
sequenceDiagram
    participant A as Subagent
    participant T as Domain Tools
    participant V as validate tool
    participant D as done tool
    A->>T: call tools to fill values
    A->>T: call more tools
    A->>V: validate (optional check)
    V-->>A: {ok: false, errors: [...]}
    A->>T: fix errors via tools
    A->>D: done
    D-->>A: {ok: false, errors: [...]}
    A->>T: fix remaining errors
    A->>D: done
    D-->>A: {ok: true} -- agent stops
```

The conversation is single and continuous — there is no restart on failure. The agent accumulates context from each attempt, making successive fixes cheaper.
Validation context
ValidationContext captures a closure over the live session state. It pairs Component::properties() schemas with HasValues::values() from synced state, builds Property structs, and runs Validate::validate() on each:
```rust
pub struct ValidationContext {
    validate_fn: Arc<dyn Fn() -> Vec<ValidationToolError> + Send + Sync>,
    done_signal: Arc<AtomicBool>,
}
```

The validate closure locks the session to read current state, so validation always runs against the latest values the agent has written.
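A minimal sketch of how such a context might be constructed, assuming simplified shapes: here a property is just a (name, optional value) pair and `ValidationToolError` is a plain `String`, whereas the real code builds Property structs from schemas. The point is that the closure locks the shared state on every call:

```rust
use std::sync::{Arc, Mutex};
use std::sync::atomic::AtomicBool;

// Assumed simplified error shape for the sketch.
type ValidationToolError = String;

struct ValidationContext {
    validate_fn: Arc<dyn Fn() -> Vec<ValidationToolError> + Send + Sync>,
    done_signal: Arc<AtomicBool>,
}

// Build a context whose closure locks the shared session state on each
// call, so it always validates the latest values the agent has written.
fn make_ctx(state: Arc<Mutex<Vec<(String, Option<String>)>>>) -> ValidationContext {
    ValidationContext {
        validate_fn: Arc::new(move || {
            state
                .lock()
                .unwrap()
                .iter()
                .filter(|(_, value)| value.is_none())
                .map(|(name, _)| format!("property `{name}` has no value"))
                .collect()
        }),
        done_signal: Arc::new(AtomicBool::new(false)),
    }
}

fn main() {
    let state = Arc::new(Mutex::new(vec![("title".to_string(), None)]));
    let ctx = make_ctx(state.clone());
    println!("{:?}", (ctx.validate_fn)()); // one error: title unset
    state.lock().unwrap()[0].1 = Some("Photosynthesis".to_string());
    println!("{:?}", (ctx.validate_fn)()); // empty: validation passes
}
```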
Metadata subagent
Metadata is its own agent — it implements Agent (preamble + tools), Validate, and Pending. It has its own tool calls for setting metadata fields (canvas size, topic, grade level, etc.).
When metadata is pending, it runs as a subagent with its own validation loop, independent of the node subagent. Metadata is usually pre-filled from onboarding, so this agent often gets skipped.
Guideline rules are never gated — they appear in the preamble via Describe but are excluded from the done/validate checks.
Two AI flows
Two AI flows coexist in the system, both dispatching through Session:
| Flow | Entry point | Purpose |
|---|---|---|
| Interactive chat | runner.rs / chat_stream() | User edits via the chat UI, multi-turn with history |
| Headless generation | subagent.rs / run_subagent() | Orchestrated top-down generation, task-specific prompt |
Both use the same dispatch path: Session::dispatch() with the bridge adapting intents to rig tools. The difference is conversation management — chat maintains history, subagents start fresh with a curated task prompt.
Guideline injection
When validation is enabled, Guideline rules from all properties are collected and appended to the system prompt as a guidelines section:
```rust
fn collect_guidelines(session: &Session<M, I>) -> String {
    // Collect Guideline(text) variants from all property rules
    // Format as:
    //   ## Guidelines
    //   - rule text 1
    //   - rule text 2
}
```

This ensures the agent follows soft constraints while generating, even though they are never hard-gated.
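The formatting step can be sketched standalone. This hypothetical helper takes a plain slice of rule texts (the real function walks Guideline(text) rule variants on each property, which is elided here):

```rust
// Hypothetical std-only sketch of the guidelines-section formatter.
fn format_guidelines(rules: &[&str]) -> String {
    if rules.is_empty() {
        return String::new(); // nothing to append to the system prompt
    }
    let mut out = String::from("## Guidelines\n");
    for rule in rules {
        out.push_str("- ");
        out.push_str(rule);
        out.push('\n');
    }
    out
}

fn main() {
    let section = format_guidelines(&["Keep labels short", "Prefer active voice"]);
    print!("{section}");
}
```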
Related
- Orchestrator — spawns subagents at each tree level
- AI Traits — `Agent`, `Validate`, `Pending` traits used by subagents
- Bridge — how `CommandTool` variants become rig tools
- CEL Environment — evaluator used by the validation loop