
# Example: Custom Adapter

How to wrap a custom LLM CLI as a Wave adapter, enabling it to participate in pipelines alongside Claude Code.

## Prerequisites

Your LLM CLI must support:

  1. Prompt input — accept a prompt via command-line argument or stdin.
  2. Headless mode — run non-interactively as a subprocess.
  3. Structured output — produce JSON output (preferred) or parseable text.
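These requirements can be checked mechanically before touching any Wave config. A rough sketch using `jq` — note that `my-llm-cli` and the `--prompt`/`--no-interactive` flags are placeholders; substitute whatever your CLI actually accepts:

```shell
# Sanity-check a CLI against the three prerequisites above.
# "my-llm-cli" and its flags are placeholders for your actual binary.
check_cli() {
  cli="$1"
  if ! command -v "$cli" >/dev/null 2>&1; then
    # Not on PATH: it cannot run headless as a subprocess
    echo "missing: $cli"
  elif "$cli" --prompt "ping" --no-interactive 2>/dev/null | jq -e . >/dev/null 2>&1; then
    # Accepted a prompt argument and produced valid JSON on stdout
    echo "ok: $cli"
  else
    echo "bad output: $cli"
  fi
}

check_cli my-llm-cli
```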

## Step 1: Define the Adapter

Add the adapter to your wave.yaml:

```yaml
adapters:
  claude:
    binary: claude
    mode: headless
    output_format: json

  # Custom adapter for a local LLM
  local-llm:
    binary: ollama
    mode: headless
    output_format: json
    default_permissions:
      allowed_tools: ["Read", "Write"]
      deny: ["Bash(rm *)", "Bash(curl *)"]
```

## Step 2: Create Personas Using the Adapter

```yaml
personas:
  # Fast navigator using local model
  local-navigator:
    adapter: local-llm
    description: "Quick codebase analysis with local model"
    system_prompt_file: .wave/personas/navigator.md
    temperature: 0.1
    permissions:
      allowed_tools: ["Read", "Glob", "Grep"]
      deny: ["Write(*)", "Bash(*)"]

  # Cloud-powered implementation
  craftsman:
    adapter: claude
    description: "Implementation with Claude"
    system_prompt_file: .wave/personas/craftsman.md
    temperature: 0.7
```
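The `system_prompt_file` entries point at plain Markdown files. A minimal illustrative `.wave/personas/navigator.md` might look like this — the content below is invented for illustration; write whatever instructions suit your model:

```markdown
# Navigator

You analyze codebase structure without modifying anything.

- Use Read, Glob, and Grep to explore.
- Never write files or run shell commands.
- Report findings concisely, as structured JSON where possible.
```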

## Step 3: Mix Adapters in a Pipeline

Use different adapters for different steps based on their requirements:

```yaml
kind: WavePipeline
metadata:
  name: hybrid-flow
  description: "Local model for analysis, cloud model for implementation"

steps:
  - id: navigate
    persona: local-navigator    # Uses local LLM — fast, free
    memory:
      strategy: fresh
    exec:
      type: prompt
      source: "Analyze the codebase structure for: {{ input }}"
    output_artifacts:
      - name: analysis
        path: .wave/output/analysis.json

  - id: implement
    persona: craftsman          # Uses Claude — higher quality
    dependencies: [navigate]
    memory:
      strategy: fresh
      inject_artifacts:
        - step: navigate
          artifact: analysis
          as: context
    exec:
      type: prompt
      source: "Implement based on the analysis: {{ input }}"
    handover:
      contract:
        type: test_suite
        command: "npm test"
        must_pass: true
```
## Step 4: Validate

```bash
$ wave validate --verbose
✓ Adapter 'claude' binary found on PATH
✓ Adapter 'local-llm' binary found on PATH
✓ Persona 'local-navigator' references adapter 'local-llm'
✓ Persona 'craftsman' references adapter 'claude'
✓ Pipeline 'hybrid-flow' DAG is valid
```

## Adapter Wrapper Script

If your LLM CLI doesn't natively support headless JSON output, write a wrapper:

```bash
#!/bin/bash
# .wave/bin/my-llm-wrapper
# Wraps a CLI to produce JSON output compatible with Wave

PROMPT="$1"

# Invoke the actual CLI, recording whether it succeeded
if RESULT=$(my-llm-cli --prompt "$PROMPT" --no-interactive 2>/dev/null); then
  STATUS="completed"
else
  STATUS="failed"
fi

# Build the JSON with jq so quotes and newlines are escaped correctly
jq -n --arg output "$RESULT" --arg status "$STATUS" \
  '{output: $output, status: $status}'
```
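You can smoke-test a wrapper's JSON contract without the real CLI by stubbing the underlying binary's output. A sketch, letting `jq` handle the escaping just as the wrapper does:

```shell
# Smoke-test the wrapper's JSON contract with stubbed CLI output.
# Text containing quotes and newlines must survive escaping intact.
RESULT='line one
says "hello"'
STATUS="completed"

# Same escaping approach as the wrapper: let jq build the JSON
OUT=$(jq -n --arg output "$RESULT" --arg status "$STATUS" \
  '{output: $output, status: $status}')

# Verify the payload is valid JSON and round-trips the raw text
printf '%s' "$OUT" | jq -e \
  '.status == "completed" and (.output | contains("hello"))' >/dev/null \
  && echo "wrapper contract OK"
```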

Make the wrapper executable (`chmod +x .wave/bin/my-llm-wrapper`), then reference it:

```yaml
adapters:
  custom:
    binary: .wave/bin/my-llm-wrapper
    mode: headless
    output_format: json
```

## Environment Variables

Each adapter can use different credentials; all environment variables are inherited from the parent process:

```bash
# Set credentials for both adapters
export ANTHROPIC_API_KEY="sk-ant-..."    # For Claude
export OLLAMA_HOST="http://localhost:11434"  # For local Ollama

wave run hybrid-flow \
  --input "add feature"
```
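Because inheritance comes from the parent process, you can also scope credentials to a single run with `env(1)` instead of exporting them shell-wide. This is standard shell behavior, not Wave-specific; demonstrated here with a stand-in command:

```shell
# env(1) sets variables only for the child process, leaving the
# parent shell untouched. Demonstrated with a stand-in command:
env WAVE_DEMO_KEY="scoped" sh -c 'echo "child sees: $WAVE_DEMO_KEY"'
echo "parent sees: ${WAVE_DEMO_KEY:-unset}"

# The same pattern applied to the pipeline run:
#   env ANTHROPIC_API_KEY="sk-ant-..." OLLAMA_HOST="http://localhost:11434" \
#       wave run hybrid-flow --input "add feature"
```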

## When to Use Multiple Adapters

| Scenario | Strategy |
| --- | --- |
| Cost optimization | Local model for analysis, cloud for implementation |
| Speed | Fast local model for navigation, thorough cloud model for review |
| Compliance | On-premise model for sensitive code, cloud for public code |
| Evaluation | Run same pipeline with different models to compare output |


Released under the MIT License.