
Pipeline Schema Reference

Pipeline YAML files define multi-step AI workflows. Store pipelines in .wave/pipelines/.

Minimal Pipeline

yaml
kind: WavePipeline
metadata:
  name: simple-task
steps:
  - id: execute
    persona: craftsman
    exec:
      type: prompt
      source: "Execute: {{ input }}"

Copy this to .wave/pipelines/simple-task.yaml and run with wave run simple-task "your task".


Complete Example

yaml
kind: WavePipeline
metadata:
  name: ops-pr-review
  description: "Automated code review pipeline"
  category: ops

input:
  source: cli

hooks:
  - name: notify-start
    event: run_start
    type: command
    command: "echo 'Pipeline started'"

pipeline_outputs:
  review_url:
    step: publish
    artifact: result
    field: ".pr_url"

chat_context:
  artifact_summaries: [findings]
  suggested_questions:
    - "What security issues were found?"
  focus_areas: [security, performance]

steps:
  - id: analyze
    persona: navigator
    model: balanced
    memory:
      strategy: fresh
    workspace:
      type: worktree
      branch: "{{ pipeline_id }}"
    exec:
      type: prompt
      source: "Analyze the codebase for: {{ input }}"
    output_artifacts:
      - name: analysis
        path: .wave/output/analysis.json
        type: json
    handover:
      contract:
        type: json_schema
        schema_path: .wave/contracts/analysis.schema.json
        source: .wave/output/analysis.json

  - id: review
    persona: auditor
    dependencies: [analyze]
    thread: review-thread
    fidelity: compact
    contexts: [security, api]
    memory:
      strategy: fresh
      inject_artifacts:
        - step: analyze
          artifact: analysis
          as: context
    exec:
      type: prompt
      source: "Review the code for security issues."
    output_artifacts:
      - name: findings
        path: .wave/output/findings.md
        type: markdown
    outcomes:
      - type: pr
        extract_from: output/findings.json
        json_path: ".pr_url"
        label: "Review PR"
    handover:
      contract:
        type: test_suite
        command: "go vet ./..."

Top-Level Fields

| Field | Required | Default | Description |
|---|---|---|---|
| kind | yes | - | Must be WavePipeline |
| metadata.name | yes | - | Pipeline identifier |
| metadata.description | no | "" | Human-readable description |
| metadata.category | no | "" | Pipeline category (e.g., impl, audit, ops) |
| metadata.release | no | false | Whether this pipeline is a released (stable) pipeline |
| metadata.disabled | no | false | Disable the pipeline without deleting it |
| input.source | no | cli | Input source: cli, file, stdin |
| input.path | no | - | File path when source: file |
| input.schema | no | - | Input schema for validation |
| input.example | no | - | Example input for documentation |
| input.label_filter | no | - | Label filter for issue-based inputs |
| input.batch_size | no | - | Batch size for multi-item inputs |
| steps | yes | - | Array of step definitions |
| hooks | no | [] | Lifecycle hooks triggered on pipeline events |
| pipeline_outputs | no | {} | Named output aliases for composability |
| chat_context | no | - | Post-pipeline chat session configuration |
| skills | no | [] | Declarative skill references |
| requires | no | - | Pipeline dependency declarations |
| max_step_visits | no | 50 | Graph-level limit on total step visits |
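Most input fields above default sensibly for CLI input; for file-driven runs they combine as in this sketch (the file and schema paths are hypothetical):

```yaml
kind: WavePipeline
metadata:
  name: issue-triage
input:
  source: file                                # read input from a file instead of the CLI
  path: .wave/input/issues.json               # hypothetical path; used when source: file
  schema: .wave/contracts/issues.schema.json  # optional validation of the input
  batch_size: 5                               # batch multi-item inputs five at a time
steps:
  - id: triage
    persona: navigator
    exec:
      type: prompt
      source: "Triage: {{ input }}"
```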

Step Fields

| Field | Required | Default | Description |
|---|---|---|---|
| id | yes | - | Unique step identifier |
| persona | conditional | - | Persona from wave.yaml (required for prompt steps) |
| adapter | no | - | Step-level adapter override (e.g., codex, gemini) |
| model | no | - | Step-level model tier or name (e.g., balanced, strongest, claude-haiku-4-5) |
| exec.type | conditional | - | prompt, command, or slash_command |
| exec.source | conditional | - | Prompt template or shell command |
| exec.source_path | no | - | Path to a prompt file (alternative to inline source) |
| dependencies | no | [] | Step IDs that must complete first |
| timeout_minutes | no | - | Step-level timeout in minutes |
| optional | no | false | If true, step failure does not block the pipeline |
| memory.strategy | no | fresh | Memory strategy (always fresh) |
| memory.inject_artifacts | no | [] | Artifacts from prior steps |
| workspace.type | no | - | worktree for git worktree workspaces |
| workspace.branch | no | auto | Branch name for worktree (supports templates) |
| workspace.mount | no | [] | Source mounts (alternative to worktree) |
| workspace.ref | no | - | Reference another step's workspace (shared worktree) |
| output_artifacts | no | [] | Files produced by this step |
| outcomes | no | [] | Structured results to extract from artifacts |
| handover.contract | no | - | Output validation |
| handover.contracts | no | [] | Multiple output validations (takes precedence over singular contract) |
| handover.compaction | no | - | Context relay settings |
| strategy | no | - | Matrix fan-out configuration |
| validation | no | [] | Pre-execution checks |
| retry | no | - | Retry and rework configuration |
| rework_only | no | false | Only runs via rework trigger, not normal DAG scheduling |
| concurrency | no | - | Max parallel agent instances for this step |
| max_concurrent_agents | no | - | Alias for concurrency |
| thread | no | - | Thread group ID for conversation continuity |
| fidelity | no | auto | Context fidelity: full, compact, summary, fresh |
| contexts | no | [] | Ontology context filter for bounded contexts |
| type | no | - | Step type: conditional, command, or empty (prompt) |
| edges | no | [] | Graph edges for conditional routing |
| max_visits | no | 10 | Max visits to this step in a loop |
| script | no | - | Shell script for command type steps |
| pipeline | no | - | Child pipeline name for sub-pipeline steps |
| input | no | - | Input template for child pipeline |
| config | no | - | Sub-pipeline configuration |
| iterate | no | - | Iterate over items (parallel fan-out) |
| branch | no | - | Branch for conditional pipeline selection |
| gate | no | - | Gate for approval or polling |
| loop | no | - | Loop for feedback loops |
| aggregate | no | - | Aggregate for output collection |
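Several step-level fields from the table (adapter, timeout_minutes, optional, concurrency) do not appear in the examples that follow; a hedged sketch combining them:

```yaml
steps:
  - id: side-docs
    persona: craftsman
    adapter: codex        # step-level adapter override
    model: cheapest       # cheapest model tier for low-stakes work
    timeout_minutes: 15   # abort the step after 15 minutes
    optional: true        # failure here does not block the pipeline
    concurrency: 2        # at most two parallel agent instances
    exec:
      type: prompt
      source: "Update the docs for the changed files"
```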

Step Definition

Basic Step

yaml
steps:
  - id: analyze
    persona: navigator
    exec:
      type: prompt
      source: "Analyze: {{ input }}"

Step with Dependencies

yaml
steps:
  - id: implement
    persona: craftsman
    dependencies: [analyze, plan]
    exec:
      type: prompt
      source: "Implement the feature"

Step with Artifact Injection

yaml
steps:
  - id: review
    persona: auditor
    dependencies: [implement]
    memory:
      strategy: fresh
      inject_artifacts:
        - step: implement
          artifact: code
          as: changes
    exec:
      type: prompt
      source: "Review the changes"

Exec Configuration

Prompt Execution

yaml
exec:
  type: prompt
  source: |
    Analyze the codebase for {{ input }}.
    Report file paths and architectural patterns.

Prompt from File

yaml
exec:
  type: prompt
  source_path: .wave/prompts/analyze.md

Use source_path to keep long prompts in separate files. The file path is relative to the project root.

Command Execution

yaml
exec:
  type: command
  source: "go test -v ./..."

Slash Command Execution

yaml
exec:
  type: slash_command
  command: review-pr
  args: "123"

Slash command execution invokes a Claude Code slash command (e.g., /review-pr) within the adapter session. The command field specifies the slash command name (without the leading /), and args provides the arguments.

| Field | Required | Description |
|---|---|---|
| command | yes | Slash command name (without / prefix) |
| args | no | Arguments to pass to the slash command |

Template Variables

| Variable | Scope | Description |
|---|---|---|
| {{ input }} | All steps | Pipeline input from --input |
| {{ task }} | Matrix steps | Current matrix item |
| {{ pipeline_id }} | All steps | Unique pipeline run ID |
| {{ project.test_command }} | All steps | Test command from wave.yaml |
| {{ project.contract_test_command }} | All steps | Contract test command from wave.yaml |
| {{ forge.cli_tool }} | All steps | Detected forge CLI (gh, glab) |
| {{ forge.type }} | All steps | Forge type (github, gitlab) |
| {{ forge.pr_term }} | All steps | PR terminology (pull request, merge request) |
| {{ forge.pr_command }} | All steps | PR command (pr, mr) |
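The forge variables make prompts portable across GitHub and GitLab; a sketch composed only of variables from the table above:

```yaml
exec:
  type: prompt
  source: |
    Open a {{ forge.pr_term }} for branch {{ pipeline_id }} by running:
    {{ forge.cli_tool }} {{ forge.pr_command }} create
    Then run the project tests: {{ project.test_command }}
```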

Model Routing

Override the model tier or specific model at the step level. See the Model Routing Guide for full details.

yaml
steps:
  - id: triage
    persona: navigator
    model: balanced
    exec:
      type: prompt
      source: "Classify the issue"

  - id: implement
    persona: craftsman
    model: strongest
    exec:
      type: prompt
      source: "Implement the solution"

Valid model tiers: cheapest, balanced, strongest. You can also specify exact model names (e.g., claude-haiku-4-5).


Output Artifacts

Declare files produced by a step:

yaml
output_artifacts:
  - name: analysis
    path: .wave/output/analysis.json
    type: json
  - name: report
    path: .wave/output/report.md
    type: markdown
  - name: stdout-capture
    source: stdout
    type: json

| Field | Required | Default | Description |
|---|---|---|---|
| name | yes | - | Artifact identifier |
| path | conditional | - | File path relative to workspace. Optional when source: stdout. |
| type | no | file | json, markdown, file, binary, directory |
| source | no | file | file (default) or stdout to capture from standard output |
| required | no | false | If true, missing artifact fails the step |

Outcomes

Outcomes extract structured results from step artifacts into the pipeline output summary. Use outcomes to surface PR URLs, issue links, deployment URLs, or other key results. See the Outcomes Guide for patterns.

yaml
outcomes:
  - type: pr
    extract_from: output/publish-result.json
    json_path: ".pr_url"
    label: "Pull Request"
  - type: url
    extract_from: output/publish-result.json
    json_path: ".deploy_urls[*]"
    json_path_label: ".label"
    label: "Deployment"
  - type: file
    extract_from: output/report.md
    label: "Analysis Report"

Outcome Fields

| Field | Required | Description |
|---|---|---|
| type | yes | Outcome type: pr, issue, url, deployment, file, artifact |
| extract_from | yes | Artifact path relative to workspace (e.g., output/publish-result.json) |
| json_path | conditional | Dot notation path to extract the value. Required for pr, issue, url, deployment. |
| json_path_label | no | Label extraction path for array items (used with [*] in json_path) |
| label | no | Human-readable label for display in the output summary |

Supported Outcome Types

| Type | Description |
|---|---|
| pr | Pull request URL |
| issue | Issue URL |
| url | Generic URL |
| deployment | Deployment URL |
| file | File deliverable (uses extract_from as path) |
| artifact | Artifact deliverable (uses extract_from as path) |

Artifact Injection

Import artifacts from prior steps:

yaml
memory:
  strategy: fresh
  inject_artifacts:
    - step: analyze
      artifact: analysis
      as: context
    - step: plan
      artifact: tasks
      as: task_list
      optional: true
    - pipeline: other-pipeline
      artifact: report
      as: upstream_report

| Field | Required | Default | Description |
|---|---|---|---|
| step | conditional | - | Source step ID (mutually exclusive with pipeline) |
| pipeline | conditional | - | Cross-pipeline artifact source name |
| artifact | yes | - | Artifact name from source step or pipeline |
| as | yes | - | Name in current workspace |
| type | no | - | Expected artifact type for validation |
| schema_path | no | - | JSON schema path for input validation |
| optional | no | false | If true, missing artifact does not fail the step |

Artifacts are copied to .wave/artifacts/<as>/ in the step workspace.


Workspace Configuration

yaml
workspace:
  type: worktree
  branch: "{{ pipeline_id }}"
  base: main

| Field | Required | Default | Description |
|---|---|---|---|
| type | no | - | worktree for git worktree workspaces |
| branch | no | auto | Branch name for the worktree. Supports template variables. Steps sharing the same branch share the same worktree. |
| base | no | HEAD | Start point for the worktree (e.g., main) |
| ref | no | - | Reference another step's workspace (shared worktree) |

When type is worktree, Wave creates a git worktree via git worktree add on the specified branch. If the branch doesn't exist, it's created from HEAD. Multiple steps with the same resolved branch reuse the same worktree directory.
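A step can also attach to another step's worktree via workspace.ref; a sketch assuming ref names the owning step's ID:

```yaml
steps:
  - id: implement
    persona: craftsman
    workspace:
      type: worktree
      branch: "{{ pipeline_id }}"
  - id: verify
    persona: auditor
    dependencies: [implement]
    workspace:
      ref: implement   # assumption: reuse the worktree created by the implement step
```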

Mount Workspace

yaml
workspace:
  mount:
    - source: ./src
      target: /code
      mode: readonly
    - source: ./test-fixtures
      target: /fixtures
      mode: readonly

| Field | Required | Default | Description |
|---|---|---|---|
| mount[].source | yes | - | Source directory |
| mount[].target | yes | - | Mount point in workspace |
| mount[].mode | no | readonly | readonly or readwrite |

Basic Directory Workspace

yaml
workspace:
  root: ./

Creates an empty workspace directory. The root field is the base path (relative to project root).


Contracts

Validate step output before proceeding.

Test Suite Contract

yaml
handover:
  contract:
    type: test_suite
    command: "npm test"

JSON Schema Contract

yaml
handover:
  contract:
    type: json_schema
    schema_path: .wave/contracts/analysis.schema.json
    source: .wave/output/analysis.json
    on_failure: retry
    max_retries: 2

TypeScript Contract

yaml
handover:
  contract:
    type: typescript_interface
    source: .wave/output/types.ts
    validate: true

Multiple Contracts

When a step requires multiple validations, use the plural contracts field. It takes precedence over the singular contract.

yaml
handover:
  contracts:
    - type: json_schema
      schema_path: .wave/contracts/output.schema.json
      source: .wave/output/result.json
    - type: test_suite
      command: "go test ./..."
      dir: project_root

LLM Judge Contract

yaml
handover:
  contract:
    type: llm_judge
    model: claude-haiku-4-5
    criteria:
      - "Output is well-structured JSON"
      - "All required fields are present"
    threshold: 0.8
    source: .wave/output/result.json

Agent Review Contract

yaml
handover:
  contract:
    type: agent_review
    persona: auditor
    criteria_path: .wave/contracts/review-criteria.md
    context:
      - source: git_diff
      - source: artifact
        artifact: implementation
    token_budget: 50000
    timeout: "120s"
    on_failure: rework
    rework_step: fix-implementation

Contract Fields

| Field | Required | Default | Description |
|---|---|---|---|
| type | yes | - | test_suite, json_schema, typescript_interface, markdown_spec, format, non_empty_file, llm_judge, agent_review |
| command | depends | - | Test command (for test_suite) |
| schema_path | depends | - | Schema path (for json_schema) |
| source | depends | - | File to validate |
| dir | no | workspace | Working directory: project_root, absolute path, or empty for workspace |
| must_pass | no | true | Whether failure blocks progression |
| on_failure | no | retry | retry, halt, rework, warn |
| max_retries | no | 2 | Maximum retry attempts |
| model | no | - | LLM model (for llm_judge) |
| criteria | no | - | Evaluation criteria list (for llm_judge) |
| threshold | no | 1.0 | Pass threshold 0.0-1.0 (for llm_judge) |
| persona | no | - | Reviewer persona (for agent_review) |
| criteria_path | no | - | Review criteria file (for agent_review) |
| context | no | - | Context sources for reviewer (for agent_review) |
| token_budget | no | unlimited | Max tokens for review agent |
| timeout | no | - | Duration string for review timeout (e.g., 60s) |
| rework_step | no | - | Step to run on review failure with on_failure: rework |
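The format, non_empty_file, and markdown_spec types have no dedicated examples above; a sketch assuming format and non_empty_file validate the file named in source:

```yaml
handover:
  contracts:
    - type: non_empty_file
      source: .wave/output/report.md    # assumption: fails if the file is missing or empty
    - type: format
      source: .wave/output/result.json  # assumption: checks the file is well-formed for its type
```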

Compaction

Configure context relay for long-running steps.

yaml
handover:
  compaction:
    trigger: "token_limit_80%"
    persona: summarizer

| Field | Default | Description |
|---|---|---|
| trigger | token_limit_80% | When to trigger relay |
| persona | summarizer | Persona for checkpoints |

Matrix Strategy

Fan-out parallel execution from a task list.

yaml
steps:
  - id: plan
    persona: philosopher
    exec:
      type: prompt
      source: "Break down into tasks. Output: {\"tasks\": [...]}"
    output_artifacts:
      - name: tasks
        path: .wave/output/tasks.json

  - id: execute
    persona: craftsman
    dependencies: [plan]
    strategy:
      type: matrix
      items_source: plan/tasks.json
      item_key: task
      max_concurrency: 4
    exec:
      type: prompt
      source: "Execute: {{ task }}"

| Field | Required | Default | Description |
|---|---|---|---|
| type | yes | - | Must be matrix |
| items_source | yes | - | Path to JSON task list |
| item_key | yes | - | JSON key for task items |
| max_concurrency | no | runtime default | Parallel workers |
| item_id_key | no | - | JSON key for unique item identifiers |
| dependency_key | no | - | JSON key for inter-item dependencies |
| child_pipeline | no | - | Pipeline name to invoke per item (instead of inline step) |
| input_template | no | - | Template for child pipeline input |
| stacked | no | false | If true, items share cumulative context |
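Per-item child pipelines use the child_pipeline and input_template fields instead of an inline exec block; a sketch (the impl-task pipeline name is hypothetical):

```yaml
strategy:
  type: matrix
  items_source: plan/tasks.json
  item_key: task
  item_id_key: id              # JSON key holding each item's unique ID
  dependency_key: depends_on   # JSON key expressing inter-item ordering
  child_pipeline: impl-task    # hypothetical pipeline run once per item
  input_template: "Implement: {{ task }}"
  max_concurrency: 2
```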

Pre-Execution Validation

Check conditions before step runs.

yaml
validation:
  - type: file_exists
    target: src/models/user.go
    message: "User model required"
  - type: command
    target: "go build ./..."
    message: "Project must compile"

| Field | Required | Description |
|---|---|---|
| type | yes | file_exists, command, schema |
| target | yes | File path or command |
| message | no | Custom error message |

Threads

Steps sharing the same thread value participate in a conversation thread. Each step receives transcripts from prior steps in the same thread, enabling multi-step reasoning chains. See the Threads Guide for patterns.

yaml
steps:
  - id: research
    persona: navigator
    thread: analysis
    fidelity: full
    exec:
      type: prompt
      source: "Research: {{ input }}"

  - id: synthesize
    persona: navigator
    thread: analysis
    fidelity: compact
    dependencies: [research]
    exec:
      type: prompt
      source: "Synthesize findings"

  - id: implement
    persona: craftsman
    thread: impl
    dependencies: [synthesize]
    exec:
      type: prompt
      source: "Implement based on synthesis"

Fidelity Levels

| Level | Description |
|---|---|
| full | Complete conversation history (default when thread is set) |
| compact | Step ID + status + truncated content summary |
| summary | LLM-generated summary via compaction adapter |
| fresh | No prior context (default when no thread) |

Thread Fields

| Field | Default | Description |
|---|---|---|
| thread | - | Thread group ID. Steps with the same thread share conversation context. |
| fidelity | full (if thread set), fresh (if no thread) | How much prior context to inject. |

Contexts

Filter which ontology bounded contexts are injected into a step. When set, only the specified contexts are provided to the persona, reducing noise for focused steps.

yaml
steps:
  - id: security-review
    persona: auditor
    contexts: [security, authentication]
    exec:
      type: prompt
      source: "Review for security vulnerabilities"

  - id: api-design
    persona: navigator
    contexts: [api, contracts]
    exec:
      type: prompt
      source: "Design the API surface"

When contexts is omitted, the step receives all available context.


Edges

Edges define conditional routing between steps in graph-mode pipelines. Use edges to create loops, branching, and conditional execution. See the Graph Loops Guide for patterns.

yaml
steps:
  - id: implement
    persona: craftsman
    thread: impl
    exec:
      type: prompt
      source: "Implement the feature"

  - id: test
    type: command
    dependencies: [implement]
    script: "go test ./..."

  - id: check
    type: conditional
    dependencies: [test]
    edges:
      - target: finalize
        condition: "outcome=success"
      - target: implement

  - id: finalize
    persona: navigator
    dependencies: [check]

Edge Fields

| Field | Required | Description |
|---|---|---|
| target | yes | Target step ID to route to |
| condition | no | Condition for this edge (e.g., outcome=success). The first edge without a condition is the default fallback. |

Step Types for Graph Mode

| Type | Purpose | Needs Persona? |
|---|---|---|
| (empty) | LLM persona execution | Yes |
| command | Shell script execution | No |
| conditional | Route based on prior step outcome | No |

Max Visits

Prevent infinite loops with visit limits:

yaml
steps:
  - id: fix
    persona: craftsman
    max_visits: 5
    exec:
      type: prompt
      source: "Fix the failing tests"

| Field | Default | Description |
|---|---|---|
| max_visits | 10 | Max times a step can be visited in a graph loop |
| max_step_visits | 50 | Pipeline-level total visit limit across all steps |

Gates

Gate steps pause pipeline execution for human decisions, CI events, or timers. See the Human Gates Guide for patterns.

Approval Gate

yaml
steps:
  - id: approve
    gate:
      type: approval
      prompt: "Review the implementation plan"
      choices:
        - label: "Approve"
          key: "a"
          target: implement
        - label: "Revise"
          key: "r"
          target: plan
        - label: "Abort"
          key: "q"
          target: _fail
      freeform: true
      default: "a"
      timeout: "1h"
    dependencies: [plan]

PR Merge Gate

yaml
steps:
  - id: wait-merge
    gate:
      type: pr_merge
      pr_number: 123
      repo: "owner/repo"
      interval: "30s"
      timeout: "2h"
    dependencies: [create-pr]

CI Pass Gate

yaml
steps:
  - id: wait-ci
    gate:
      type: ci_pass
      branch: feature-branch
      interval: "30s"
      timeout: "30m"
    dependencies: [push]

Timer Gate

yaml
steps:
  - id: cooldown
    gate:
      type: timer
      timeout: "5m"
      message: "Cooling down before next phase"
    dependencies: [deploy]

Gate Fields

| Field | Required | Default | Description |
|---|---|---|---|
| type | yes | - | approval, pr_merge, ci_pass, timer |
| timeout | no | - | Duration before auto-resolving (e.g., 30m, 2h) |
| message | no | - | Display message while waiting |
| auto | no | false | Auto-approve (for CI/testing) |
| prompt | no | - | Prompt text for approval gates |
| choices | no | - | Interactive choice options for approval gates |
| freeform | no | false | Allow freeform text input alongside choices |
| default | no | - | Default choice key (used on timeout or auto-approve) |
| pr_number | no | - | PR number for pr_merge gates |
| repo | no | auto-detect | owner/repo slug for poll gates |
| branch | no | auto-detect | Branch name for ci_pass gates |
| interval | no | 30s | Poll interval for pr_merge and ci_pass gates |

Gate Choice Fields

| Field | Required | Description |
|---|---|---|
| label | yes | Human-readable label (e.g., "Approve") |
| key | yes | Keyboard shortcut key (e.g., "a") |
| target | no | Target step ID on selection, or _fail to abort the pipeline |

Iterate

Iterate over a collection of items, executing a child pipeline for each. Use iterate for parallel fan-out over dynamic item lists. See the Composition Guide for patterns.

yaml
steps:
  - id: process-items
    iterate:
      over: "{{ steps.plan.artifacts.items }}"
      mode: parallel
      max_concurrent: 3
    pipeline: process-single-item
    input: "{{ item }}"
    config:
      inject: [context]
      extract: [result]
    dependencies: [plan]

Iterate Fields

| Field | Required | Default | Description |
|---|---|---|---|
| over | yes | - | Template expression resolving to a JSON array |
| mode | yes | - | sequential or parallel |
| max_concurrent | no | - | Max parallel workers (only for parallel mode) |

Branch

Conditional pipeline selection based on a runtime value. Use branch to route execution to different pipelines based on step results. See the Composition Guide for patterns.

yaml
steps:
  - id: classify
    persona: navigator
    exec:
      type: prompt
      source: "Classify the issue as: bug, feature, or docs"
    output_artifacts:
      - name: classification
        path: .wave/output/classification.json
        type: json

  - id: route
    branch:
      on: "{{ steps.classify.artifacts.classification.type }}"
      cases:
        bug: impl-bugfix
        feature: impl-feature
        docs: doc-update
        _default: skip
    dependencies: [classify]

Branch Fields

| Field | Required | Description |
|---|---|---|
| on | yes | Template expression to evaluate |
| cases | yes | Map of value to pipeline name. Use skip for no-op. |

Loop

Feedback loops execute sub-steps repeatedly until a condition is met or the iteration limit is reached. See the Composition Guide for patterns.

yaml
steps:
  - id: refine
    loop:
      max_iterations: 5
      until: "{{ steps.validate.outcome == 'success' }}"
      steps:
        - id: improve
          persona: craftsman
          exec:
            type: prompt
            source: "Improve the implementation"
        - id: validate
          type: command
          script: "go test ./..."
          dependencies: [improve]
    dependencies: [initial-impl]

Loop Fields

| Field | Required | Default | Description |
|---|---|---|---|
| max_iterations | yes | - | Hard limit on iterations |
| until | no | - | Template condition for early exit |
| steps | no | - | Sub-steps to execute per iteration |

Aggregate

Collect and merge outputs from prior steps (typically after iterate or matrix fan-out). See the Composition Guide for patterns.

yaml
steps:
  - id: collect
    aggregate:
      from: "{{ steps.process-items.results }}"
      into: .wave/output/combined.json
      strategy: merge_arrays
      key: findings          # extract .findings from each JSON object before merging
    dependencies: [process-items]

Aggregate Fields

| Field | Required | Description |
|---|---|---|
| from | yes | Template expression for source data |
| into | yes | Output file path |
| strategy | yes | merge_arrays, concat, or reduce |
| key | no | JSON object key to extract before merging (merge_arrays only). When set, each element is expected to be an object and the value at this key (which must be an array) is extracted and merged. |

Aggregation Strategies

| Strategy | Description |
|---|---|
| merge_arrays | Merge JSON arrays from all items into one array. When key is set, extracts the named field from each JSON object before merging. |
| concat | Concatenate text outputs |
| reduce | Custom reduction (requires reduce template) |
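To illustrate merge_arrays with key, suppose two fan-out items produce these (hypothetical) JSON objects:

```json
[
  {"findings": ["sql injection in login"]},
  {"findings": ["xss in search"]}
]
```

With strategy: merge_arrays and key: findings, the combined output file would hold the single merged array ["sql injection in login", "xss in search"].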

Sub-Pipelines

Execute a child pipeline as a step. Use sub-pipelines for reusable workflow components. See the Composition Guide for patterns.

yaml
steps:
  - id: run-tests
    pipeline: test-suite
    input: "{{ input }}"
    config:
      inject: [implementation]
      extract: [test-results]
      timeout: "3600s"
      max_cycles: 10
      stop_condition: "{{ child.status == 'all_pass' }}"
    dependencies: [implement]

Sub-Pipeline Config Fields

| Field | Required | Default | Description |
|---|---|---|---|
| pipeline | yes | - | Child pipeline name |
| input | no | - | Input template for the child pipeline |
| config.inject | no | [] | Parent artifact names to inject into the child |
| config.extract | no | [] | Child artifact names to extract back to parent |
| config.timeout | no | - | Hard timeout for child execution (e.g., 3600s) |
| config.max_cycles | no | - | Max iterations for child loop steps |
| config.stop_condition | no | - | Template expression for early termination |

Hooks

Lifecycle hooks trigger actions on pipeline events. Hooks run shell commands, HTTP requests, LLM evaluations, or scripts at defined points in the pipeline lifecycle.

yaml
hooks:
  - name: notify-start
    event: run_start
    type: command
    command: "echo 'Pipeline {{ pipeline_id }} started'"

  - name: slack-notification
    event: run_completed
    type: http
    url: "https://hooks.slack.com/services/T.../B.../..."
    timeout: "10s"
    fail_open: true

  - name: quality-check
    event: step_completed
    type: llm_judge
    model: claude-haiku-4-5
    prompt: "Evaluate the quality of the step output"
    matcher: "step_id=implement"
    blocking: true

  - name: cleanup
    event: run_failed
    type: script
    script: |
      rm -rf /tmp/wave-cache
      echo "Cleaned up"

Hook Fields

| Field | Required | Default | Description |
|---|---|---|---|
| name | yes | - | Hook identifier |
| event | yes | - | Lifecycle event to trigger on |
| type | yes | - | command, http, llm_judge, script |
| command | conditional | - | Shell command (for command type) |
| url | conditional | - | HTTP endpoint (for http type) |
| model | conditional | - | LLM model (for llm_judge type) |
| prompt | conditional | - | Evaluation prompt (for llm_judge type) |
| script | conditional | - | Shell script (for script type) |
| matcher | no | - | Filter which steps trigger this hook (e.g., step_id=implement) |
| blocking | no | event-dependent | Whether the hook blocks pipeline execution on failure |
| fail_open | no | type-dependent | If true, hook errors do not block the pipeline |
| timeout | no | type-dependent | Duration string (defaults: command 30s, http 10s, llm_judge 60s, script 30s) |

Lifecycle Events

| Event | Scope | Description |
|---|---|---|
| run_start | Pipeline | Fires when the pipeline run begins |
| run_completed | Pipeline | Fires when the pipeline completes successfully |
| run_failed | Pipeline | Fires when the pipeline fails |
| step_start | Step | Fires before a step executes |
| step_completed | Step | Fires after a step completes successfully |
| step_failed | Step | Fires when a step fails |
| step_retrying | Step | Fires when a step is about to retry |
| contract_validated | Step | Fires after a contract passes validation |
| artifact_created | Step | Fires when an output artifact is written |
| workspace_created | Step | Fires when a workspace is provisioned |

Pipeline Outputs

Named output aliases expose pipeline results for composition with other pipelines. Parent pipelines can reference these outputs when using sub-pipelines.

yaml
pipeline_outputs:
  review_url:
    step: publish
    artifact: result
    field: ".pr_url"
  summary:
    step: analyze
    artifact: report

Pipeline Output Fields

| Field | Required | Description |
|---|---|---|
| step | yes | Source step ID |
| artifact | yes | Artifact name from the source step |
| field | no | Optional JSON field extraction (dot notation) |

Chat Context

Configure what context to inject into post-pipeline interactive chat sessions. When a pipeline completes, Wave can start a chat session pre-loaded with pipeline results.

yaml
chat_context:
  artifact_summaries:
    - analysis
    - findings
  suggested_questions:
    - "What were the main security findings?"
    - "Which files need the most attention?"
  focus_areas:
    - security
    - performance
    - architecture
  max_context_tokens: 12000

Chat Context Fields

| Field | Default | Description |
|---|---|---|
| artifact_summaries | [] | Artifact names to summarize in the chat context |
| suggested_questions | [] | Opening questions displayed to the user |
| focus_areas | [] | Areas to highlight in the chat session |
| max_context_tokens | 8000 | Token budget for injected context |

Skills

Declarative skill references ensure required skills are available before the pipeline runs. Skills provide domain-specific capabilities to personas.

yaml
skills:
  - golang
  - docker

requires:
  skills:
    golang:
      install: "go install github.com/example/skill@latest"
      check: "which go-skill"
    docker:
      check: "docker version"
  tools:
    - gh
    - jq

Pipeline-Level skills

A list of skill names that the pipeline uses. Wave validates these are available at runtime.

Requires Block

| Field | Description |
|---|---|
| requires.skills | Map of skill name to config (install, init, check commands) |
| requires.tools | List of CLI tool names that must be on PATH |

Skill Config Fields

| Field | Description |
|---|---|
| install | Command to install the skill |
| init | Command to initialize the skill after install |
| check | Command to verify the skill is available |
| commands_glob | Glob pattern for skill command files |

See the Skill Authoring Guide for creating custom skills.


Max Step Visits

Pipeline-level limit on total step visits across all steps in graph-mode pipelines. Prevents runaway loops.

yaml
kind: WavePipeline
metadata:
  name: iterative-fix
max_step_visits: 30

steps:
  - id: fix
    persona: craftsman
    max_visits: 10
    # ...

| Field | Level | Default | Description |
|---|---|---|---|
| max_step_visits | Pipeline | 50 | Total visits across all steps in the pipeline |
| max_visits | Step | 10 | Max visits for a single step |

When either limit is reached, the pipeline halts with an error indicating the loop limit was exceeded.


DAG Rules

Pipeline steps normally form a directed acyclic graph (DAG); in graph-mode pipelines (those using edges), cycles are permitted.

Enforced rules:

  • No circular dependencies in DAG mode (cycles allowed only via edges in graph mode)
  • All dependencies must reference valid step IDs
  • All persona values must exist in wave.yaml (for prompt steps)
  • Independent steps may run in parallel
yaml
steps:
  - id: analyze        # Runs first
    persona: navigator

  - id: security       # Parallel with quality
    persona: auditor
    dependencies: [analyze]

  - id: quality        # Parallel with security
    persona: auditor
    dependencies: [analyze]

  - id: summary        # Waits for both
    persona: navigator
    dependencies: [security, quality]

Retry and Rework

Control what happens when a step fails after exhausting its retry attempts.

Retry Configuration

yaml
steps:
  - id: flaky-step
    persona: craftsman
    exec:
      type: prompt
      source: "Implement feature"
    retry:
      max_attempts: 3
      backoff: exponential
      base_delay: "2s"
      max_delay: "30s"
      adapt_prompt: true
      on_failure: fail

Retry Policy Presets

Use named policies instead of configuring individual fields:

yaml
retry:
  policy: standard

| Policy | Attempts | Backoff | Base Delay | Max Delay |
|---|---|---|---|---|
| none | 1 | fixed | 0s | 0s |
| standard | 3 | exponential | 1s | 30s |
| aggressive | 5 | exponential | 200ms | 30s |
| patient | 3 | exponential | 5s | 90s |

Explicit fields override policy defaults.
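For example, a step can start from the standard preset and raise only the attempt count:

```yaml
retry:
  policy: standard    # 3 attempts, exponential backoff, 1s base, 30s cap
  max_attempts: 5     # explicit field overrides the preset's 3 attempts
```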

Retry Fields

| Field | Required | Default | Description |
|---|---|---|---|
| policy | no | - | Named preset: none, standard, aggressive, patient |
| max_attempts | no | 1 | Total number of attempts (1 = no retry) |
| backoff | no | linear | fixed, linear, or exponential |
| base_delay | no | 1s | Base delay between retries (e.g., 2s, 500ms) |
| max_delay | no | 30s | Maximum delay cap |
| adapt_prompt | no | false | Inject prior failure context into retry prompt |
| on_failure | no | fail | Action when all attempts are exhausted: fail, skip, continue, rework |
| rework_step | conditional | - | Step ID to execute when on_failure: rework. Required when on_failure is rework. |

On-Failure Actions

| Action | Description |
|---|---|
| fail | Halt the pipeline (default) |
| skip | Mark step as skipped, continue pipeline |
| continue | Mark step as failed, continue pipeline |
| rework | Execute an alternative step (rework_step) as a fallback |

Rework Branching

When on_failure: rework is set, the executor redirects to an alternative step after all retry attempts are exhausted:

yaml
steps:
  - id: complex-impl
    persona: craftsman
    exec:
      type: prompt
      source: "Implement the complex feature"
    retry:
      max_attempts: 2
      on_failure: rework
      rework_step: simple-impl

  - id: simple-impl
    persona: craftsman
    rework_only: true
    exec:
      type: prompt
      source: "Implement a simpler fallback"

Rework behavior:

  1. The failed step is marked as failed
  2. Failure context (error, duration, partial artifacts) is injected into the rework step's prompt
  3. The rework step executes with the failure context
  4. On success, the rework step's artifacts replace the failed step's artifacts for downstream steps
  5. If the rework step itself fails, its own on_failure policy applies

DAG validation rules for rework targets:

  • The rework target must be an existing step in the pipeline
  • The rework target cannot be an upstream dependency of the failing step
  • The failing step cannot be a dependency of the rework target
  • A step cannot rework to itself

Step States

| State | Description |
|---|---|
| pending | Waiting for dependencies |
| running | Currently executing |
| completed | Finished successfully |
| retrying | Failed, attempting retry |
| reworking | Rework step executing after failure |
| failed | Max retries exceeded |
| skipped | Skipped (dependency failed or on_failure: skip) |


Released under the MIT License.