
Building a New Adapter

This guide explains how to add a first-class adapter to Wave's Go codebase. A first-class adapter is a native Go type that implements the AdapterRunner interface, giving you full control over subprocess lifecycle, streaming events, sandbox integration, and workspace setup.

When to use this guide vs. ProcessGroupRunner: The generic ProcessGroupRunner (documented in Custom Adapter Example) wraps any CLI that accepts a prompt and produces output. Use it when you just need to invoke an external binary. Build a first-class adapter when you need:

  • Custom NDJSON stream parsing for real-time progress events
  • Workspace preparation (config files, system prompts)
  • Curated environment construction
  • Adapter-specific CLI argument building

The AdapterRunner Interface

Every adapter implements a single method:

go
// internal/adapter/adapter.go

type AdapterRunner interface {
    Run(ctx context.Context, cfg AdapterRunConfig) (*AdapterResult, error)
}

AdapterRunConfig

The executor passes all step configuration through AdapterRunConfig:

go
type AdapterRunConfig struct {
    Adapter       string        // Adapter name from manifest (e.g., "claude", "opencode")
    Persona       string        // Persona name for this step
    WorkspacePath string        // Absolute path to the ephemeral workspace directory
    Prompt        string        // The prompt text to send to the LLM CLI
    SystemPrompt  string        // Pre-assembled system prompt (if provided)
    Timeout       time.Duration // Step timeout (0 means adapter picks a default)
    Env           []string      // Step-specific environment variables ("KEY=VALUE")
    Temperature   float64       // LLM temperature setting
    AllowedTools  []string      // Tools the persona is allowed to use
    DenyTools     []string      // Tools explicitly denied
    OutputFormat  string        // Expected output format
    Debug         bool          // Enable debug logging
    Model         string        // Model identifier (e.g., "opus", "openai/gpt-4o")

    // Sandbox configuration (derived from manifest)
    SandboxEnabled bool     // Master switch from runtime.sandbox.enabled
    AllowedDomains []string // Network domain allowlist
    EnvPassthrough []string // Env var names to pass through from host
    SandboxBackend string   // "docker", "bubblewrap", or "none"
    DockerImage    string   // Docker image when SandboxBackend == "docker"

    // Skill provisioning
    SkillCommandsDir string     // Source directory for skill command files
    ResolvedSkills   []SkillRef // Skills resolved from hierarchical config

    // Concurrency
    MaxConcurrentAgents int // Max sub-agents the persona may spawn (0 or 1 = no hint)

    // Contract compliance prompt (auto-generated by executor from step contract)
    ContractPrompt string

    // Streaming callback
    OnStreamEvent func(StreamEvent) // Called for each real-time event; nil = ignore
}

AdapterResult

Your adapter must return an *AdapterResult:

go
type AdapterResult struct {
    ExitCode      int       // Process exit code (0 = success)
    Stdout        io.Reader // Full stdout content for downstream processing
    TokensUsed    int       // Total tokens consumed
    TokensIn      int       // Input tokens (prompt + cache creation)
    TokensOut     int       // Output tokens (completion)
    Artifacts     []string  // Artifact names extracted from output
    ResultContent string    // Extracted text content from the adapter response
    FailureReason string    // Classification: "timeout", "context_exhaustion", "general_error"
    Subtype       string    // Result event subtype (e.g., "success", "error_max_turns")
}

Skeleton Adapter

Here is a minimal adapter that wraps a hypothetical myllm CLI:

go
package adapter

import (
    "bufio"
    "bytes"
    "context"
    "fmt"
    "os/exec"
    "syscall"
    "time"
)

type MyLLMAdapter struct {
    binaryPath string
}

func NewMyLLMAdapter() *MyLLMAdapter {
    path := "myllm"
    if p, err := exec.LookPath("myllm"); err == nil {
        path = p
    }
    return &MyLLMAdapter{binaryPath: path}
}

func (a *MyLLMAdapter) Run(ctx context.Context, cfg AdapterRunConfig) (*AdapterResult, error) {
    if cfg.Timeout == 0 {
        cfg.Timeout = 10 * time.Minute
    }

    ctx, cancel := context.WithTimeout(ctx, cfg.Timeout)
    defer cancel()

    // 1. Prepare workspace (write config files, system prompt)
    if err := a.prepareWorkspace(cfg.WorkspacePath, cfg); err != nil {
        return nil, fmt.Errorf("failed to prepare workspace: %w", err)
    }

    // 2. Build CLI arguments
    args := a.buildArgs(cfg)
    cmd := exec.CommandContext(ctx, a.binaryPath, args...)
    cmd.Dir = cfg.WorkspacePath

    // 3. Curated environment
    cmd.Env = BuildCuratedEnvironment(cfg)

    // 4. Process group isolation
    cmd.SysProcAttr = &syscall.SysProcAttr{
        Setpgid: true,
        Pgid:    0,
    }

    // 5. Capture stdout with streaming
    stdoutPipe, err := cmd.StdoutPipe()
    if err != nil {
        return nil, fmt.Errorf("failed to create stdout pipe: %w", err)
    }

    if err := cmd.Start(); err != nil {
        return nil, fmt.Errorf("failed to start myllm: %w", err)
    }

    var stdoutBuf bytes.Buffer
    stdoutDone := make(chan error, 1)

    go func() {
        scanner := bufio.NewScanner(stdoutPipe)
        scanner.Buffer(make([]byte, 0, 1024*1024), 10*1024*1024)
        for scanner.Scan() {
            line := scanner.Bytes()
            stdoutBuf.Write(line)
            stdoutBuf.WriteByte('\n')

            // Parse and emit stream events
            if cfg.OnStreamEvent != nil {
                if evt, ok := a.parseStreamLine(line); ok {
                    cfg.OnStreamEvent(evt)
                }
            }
        }
        stdoutDone <- scanner.Err()
    }()

    // 6. Wait for completion or timeout
    select {
    case <-ctx.Done():
        killProcessGroup(cmd.Process)
        cmd.Wait()
        return nil, ctx.Err()
    case err := <-stdoutDone:
        if err != nil {
            killProcessGroup(cmd.Process)
            _ = cmd.Wait()
            return nil, fmt.Errorf("failed to read stdout: %w", err)
        }
    }

    cmdErr := cmd.Wait()
    result := &AdapterResult{
        ExitCode:   0,
        Stdout:     bytes.NewReader(stdoutBuf.Bytes()),
        TokensUsed: estimateTokens(stdoutBuf.String()),
    }

    if cmdErr != nil {
        result.ExitCode = exitCodeFromError(cmdErr)
    }

    return result, nil
}

Key patterns used by all built-in adapters:

  1. Binary lookup — use exec.LookPath in the constructor to find the binary on $PATH
  2. Default timeout — set a sensible default when cfg.Timeout == 0
  3. Process group — Setpgid: true ensures killProcessGroup can kill the entire process tree
  4. Curated environment — call BuildCuratedEnvironment(cfg) instead of inheriting os.Environ()
  5. Buffered streaming — use bufio.Scanner with a large buffer (10MB max line) for NDJSON parsing

Source files: internal/adapter/adapter.go, internal/adapter/claude.go, internal/adapter/opencode.go

Streaming Events

Adapters emit real-time progress events via the OnStreamEvent callback. These events power the TUI progress display and structured logging.

StreamEvent

go
type StreamEvent struct {
    Type      string // "tool_use", "tool_result", "text", "result", "system"
    ToolName  string // e.g., "Read", "Write", "Bash"
    ToolInput string // Summary of input (file path, command, pattern)
    Content   string // Text content or result summary
    TokensIn  int    // Cumulative input tokens
    TokensOut int    // Cumulative output tokens
    Subtype   string // Result event subtype: "success", "error_max_turns", etc.
}

Event Types

Type        | When emitted               | Key fields
----------- | -------------------------- | ----------
system      | CLI initialization         | —
text        | LLM generates text output  | Content (truncated to 200 chars)
tool_use    | LLM invokes a tool         | ToolName, ToolInput
tool_result | Tool returns a result      | (often skipped — tool_use already reported)
result      | Final result event         | TokensIn, TokensOut, Subtype, Content

The OnStreamEvent Callback

The executor sets cfg.OnStreamEvent before calling your adapter. If it is nil, skip event emission:

go
if cfg.OnStreamEvent != nil {
    if evt, ok := a.parseStreamLine(line); ok {
        cfg.OnStreamEvent(evt)
    }
}
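On the consuming side, a callback typically renders each event for display. A minimal sketch; the message format is invented and not Wave's actual TUI output, and the StreamEvent stub is repeated so the snippet compiles alone.

```go
import "fmt"

// Stub repeated so the sketch compiles alone; the real StreamEvent
// is defined above.
type StreamEvent struct {
    Type, ToolName, ToolInput, Content, Subtype string
    TokensIn, TokensOut                         int
}

// formatStreamEvent renders an event as a one-line progress message,
// the kind of thing an OnStreamEvent callback might log.
func formatStreamEvent(evt StreamEvent) string {
    switch evt.Type {
    case "tool_use":
        return fmt.Sprintf("%s(%s)", evt.ToolName, evt.ToolInput)
    case "result":
        return fmt.Sprintf("done (%s): %d in / %d out tokens", evt.Subtype, evt.TokensIn, evt.TokensOut)
    default:
        return evt.Content
    }
}
```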

Parsing NDJSON

Most LLM CLIs produce newline-delimited JSON (NDJSON) on stdout. Parse each line independently:

go
func (a *MyLLMAdapter) parseStreamLine(line []byte) (StreamEvent, bool) {
    line = bytes.TrimSpace(line)
    if len(line) == 0 {
        return StreamEvent{}, false
    }

    var obj map[string]json.RawMessage
    if err := json.Unmarshal(line, &obj); err != nil {
        return StreamEvent{}, false // Skip malformed lines
    }

    var eventType string
    if raw, ok := obj["type"]; ok {
        json.Unmarshal(raw, &eventType)
    }

    switch eventType {
    case "text":
        // Extract text content
        var content string
        if raw, ok := obj["content"]; ok {
            json.Unmarshal(raw, &content)
        }
        if len(content) > 200 {
            content = content[:200]
        }
        return StreamEvent{Type: "text", Content: content}, true

    case "tool_use":
        // Extract tool name and summarize input
        var toolName string
        if raw, ok := obj["name"]; ok {
            json.Unmarshal(raw, &toolName)
        }
        var input json.RawMessage
        if raw, ok := obj["input"]; ok {
            input = raw
        }
        target := extractToolTarget(toolName, input)
        return StreamEvent{
            Type:      "tool_use",
            ToolName:  toolName,
            ToolInput: target,
        }, true

    case "result":
        // Extract token usage and subtype
        var usage struct {
            InputTokens  int `json:"input_tokens"`
            OutputTokens int `json:"output_tokens"`
        }
        if raw, ok := obj["usage"]; ok {
            json.Unmarshal(raw, &usage)
        }
        var subtype string
        if raw, ok := obj["subtype"]; ok {
            json.Unmarshal(raw, &subtype)
        }
        return StreamEvent{
            Type:      "result",
            TokensIn:  usage.InputTokens,
            TokensOut: usage.OutputTokens,
            Subtype:   subtype,
        }, true

    default:
        return StreamEvent{}, false
    }
}

The extractToolTarget helper (defined in internal/adapter/claude.go) summarizes tool inputs for display — it extracts the most relevant field per tool (e.g., file_path for Read/Write, command for Bash, pattern for Grep).

Sandbox Integration

Adapters can optionally run inside a sandbox for additional isolation. Wave supports Docker and bubblewrap backends.

Docker Sandbox Wrapping

When cfg.SandboxBackend == "docker", wrap the command using the sandbox API:

go
import "github.com/recinq/wave/internal/sandbox"

if cfg.SandboxBackend == "docker" {
    sb, err := sandbox.NewSandbox(sandbox.SandboxBackendDocker)
    if err != nil {
        return nil, fmt.Errorf("failed to create docker sandbox: %w", err)
    }
    defer func() { _ = sb.Cleanup(ctx) }()

    sandboxCfg := sandbox.Config{
        Backend:        sandbox.SandboxBackendDocker,
        DockerImage:    cfg.DockerImage,
        AllowedDomains: cfg.AllowedDomains,
        EnvPassthrough: cfg.EnvPassthrough,
        WorkspacePath:  cfg.WorkspacePath,
        Debug:          cfg.Debug,
    }
    cmd, err = sb.Wrap(ctx, cmd, sandboxCfg)
    if err != nil {
        return nil, fmt.Errorf("failed to wrap command in sandbox: %w", err)
    }
}

The Sandbox interface (internal/sandbox/sandbox.go):

go
type Sandbox interface {
    Wrap(ctx context.Context, cmd *exec.Cmd, cfg Config) (*exec.Cmd, error)
    Validate() error
    Cleanup(ctx context.Context) error
}

Wrap transforms the exec.Cmd to run inside the container. Cleanup removes temporary resources after execution.

Curated Environment

All first-class adapters use BuildCuratedEnvironment instead of inheriting the full host environment. This prevents credential leakage:

go
// internal/adapter/environment.go

func BuildCuratedEnvironment(cfg AdapterRunConfig) []string {
    env := []string{
        "HOME=" + os.Getenv("HOME"),
        "PATH=" + os.Getenv("PATH"),
        "TERM=" + getenvDefault("TERM", "xterm-256color"),
        "TMPDIR=/tmp",
    }

    // Add explicitly allowed env vars from manifest
    for _, key := range cfg.EnvPassthrough {
        if val := os.Getenv(key); val != "" {
            env = append(env, key+"="+val)
        }
    }

    // Step-specific env vars (from pipeline config)
    env = append(env, cfg.Env...)
    return env
}

Only four base variables plus those in runtime.sandbox.env_passthrough reach the subprocess. Adapter-specific variables (e.g., telemetry suppression) are appended afterward.

Domain Filtering

When cfg.AllowedDomains is populated (from the manifest's runtime.sandbox.default_allowed_domains or persona-level sandbox.allowed_domains), you can restrict network access. The Claude adapter projects this into settings.json:

go
if cfg.SandboxEnabled {
    settings.Sandbox = &SandboxSettings{
        Enabled: true,
        Network: &NetworkSettings{
            AllowedDomains: cfg.AllowedDomains,
        },
    }
}

For adapters that don't read a settings file, implement domain filtering at the process level (iptables rules, proxy configuration, or sandbox network policy).

Process Group Isolation

All adapters set Setpgid: true to create a process group. The shared killProcessGroup function handles graceful shutdown:

go
func killProcessGroup(process *os.Process) {
    _ = syscall.Kill(-process.Pid, syscall.SIGTERM)  // Graceful shutdown
    go func() {
        time.Sleep(3 * time.Second)
        _ = syscall.Kill(-process.Pid, syscall.SIGKILL)  // Force kill after 3s
    }()
}

This ensures the adapter and all child processes are terminated on timeout or cancellation.

Workspace Setup

Before running the LLM CLI, first-class adapters prepare the workspace with configuration files.

Agent .md File Assembly

The Claude adapter compiles persona configuration into a self-contained agent .md file with YAML frontmatter. The file is written to .claude/wave-agent.md and passed via --agent:

YAML frontmatter contains:

  • model — the LLM model identifier
  • tools — allowed tool list (passed through verbatim from persona config)
  • disallowedTools — denied tool list (includes auto-injected TodoWrite)
  • permissionMode: bypassPermissions — always set

Body is assembled from four layers:

  1. Base protocol preamble — shared across all personas (.wave/personas/base-protocol.md)
  2. Persona system prompt — role, responsibilities, constraints
  3. Contract compliance section — auto-generated from step contract schema (appended to user prompt, not agent .md body)
  4. Restriction section — denied/allowed tools and network domains

Your adapter may use a different mechanism (e.g., AGENTS.md for OpenCode, a custom config file for other CLIs).

settings.json Generation

The Claude adapter generates .claude/settings.json only when sandbox is enabled. Model, permissions, and tools are embedded in the agent frontmatter instead:

go
if cfg.SandboxEnabled {
    settings := SandboxOnlySettings{
        Sandbox: &SandboxSettings{
            Enabled:                  true,
            AllowUnsandboxedCommands: false,
            AutoAllowBashIfSandboxed: true,
        },
    }
}

Skill Command Copying

When cfg.SkillCommandsDir is set, the adapter copies .md skill command files into the workspace's command directory (e.g., .claude/commands/ for Claude). This makes skills available to the persona during execution.

Registration

Adding to ResolveAdapter

Register your adapter in internal/adapter/opencode.go:

go
func ResolveAdapter(adapterName string) AdapterRunner {
    switch strings.ToLower(adapterName) {
    case "claude":
        return NewClaudeAdapter()
    case "opencode":
        return NewOpenCodeAdapter()
    case "browser":
        return NewBrowserAdapter()
    case "myllm":                    // Add your adapter here
        return NewMyLLMAdapter()
    default:
        return NewProcessGroupRunner()
    }
}

Manifest Configuration

Add the adapter to wave.yaml:

yaml
adapters:
  myllm:
    binary: myllm
    mode: headless
    output_format: json
    default_permissions:
      allowed_tools: ["Read", "Write", "Edit", "Bash"]
      deny: []

personas:
  my-coder:
    adapter: myllm
    model: my-model-v1
    temperature: 0.7

Testing Patterns

MockAdapter with Functional Options

Use MockAdapter to test pipeline execution without real LLM calls:

go
import "github.com/recinq/wave/internal/adapter"

mock := adapter.NewMockAdapter(
    adapter.WithStdoutJSON(`{"status": "success", "output": "done"}`),
    adapter.WithExitCode(0),
    adapter.WithTokensUsed(5000),
)

result, err := mock.Run(ctx, adapter.AdapterRunConfig{
    Prompt:        "test prompt",
    WorkspacePath: "/tmp/workspace",
})

Available options:

Option                | Description
--------------------- | -----------
WithStdoutJSON(json)  | Set the stdout content returned by the mock
WithExitCode(code)    | Set the process exit code
WithTokensUsed(n)     | Set the token count
WithSimulatedDelay(d) | Add a delay before returning (useful for timeout tests)
WithFailure(err)      | Make Run return an error

configCapturingAdapter

To inspect what configuration the executor passes to your adapter, wrap MockAdapter with a capturing layer:

go
type configCapturingAdapter struct {
    *adapter.MockAdapter
    mu         sync.Mutex
    lastConfig adapter.AdapterRunConfig
}

func (a *configCapturingAdapter) Run(ctx context.Context, cfg adapter.AdapterRunConfig) (*adapter.AdapterResult, error) {
    a.mu.Lock()
    a.lastConfig = cfg
    a.mu.Unlock()
    return a.MockAdapter.Run(ctx, cfg)
}

func (a *configCapturingAdapter) getLastConfig() adapter.AdapterRunConfig {
    a.mu.Lock()
    defer a.mu.Unlock()
    return a.lastConfig
}

Usage in tests:

go
capturingAdapter := &configCapturingAdapter{
    MockAdapter: adapter.NewMockAdapter(
        adapter.WithStdoutJSON(`{"status": "success"}`),
        adapter.WithTokensUsed(100),
    ),
}

// Run pipeline with the capturing adapter...

cfg := capturingAdapter.getLastConfig()
assert.Equal(t, 30*time.Minute, cfg.Timeout)
assert.Equal(t, "myllm", cfg.Adapter)

Integration Tests with ProcessGroupRunner

For integration tests that exercise real subprocess execution, use ProcessGroupRunner with a simple shell command:

go
func TestProcessGroupRunner_BasicExecution(t *testing.T) {
    runner := adapter.NewProcessGroupRunner()

    result, err := runner.Run(context.Background(), adapter.AdapterRunConfig{
        Adapter:       "echo",
        Prompt:        `{"status": "ok"}`,
        WorkspacePath: t.TempDir(),
        Timeout:       5 * time.Second,
    })

    require.NoError(t, err)
    assert.Equal(t, 0, result.ExitCode)
}

Table-Driven Test Structure

Follow Go conventions with table-driven tests:

go
func TestMyLLMAdapter_BuildArgs(t *testing.T) {
    tests := []struct {
        name     string
        cfg      adapter.AdapterRunConfig
        wantArgs []string
    }{
        {
            name: "basic prompt",
            cfg: adapter.AdapterRunConfig{
                Prompt: "hello",
                Model:  "my-model",
            },
            wantArgs: []string{"--prompt", "hello", "--model", "my-model"},
        },
        {
            name: "with debug",
            cfg: adapter.AdapterRunConfig{
                Prompt: "hello",
                Debug:  true,
            },
            wantArgs: []string{"--prompt", "hello", "--verbose"},
        },
    }

    for _, tt := range tests {
        t.Run(tt.name, func(t *testing.T) {
            a := NewMyLLMAdapter()
            got := a.buildArgs(tt.cfg)
            assert.Equal(t, tt.wantArgs, got)
        })
    }
}

Further Reading

  • Adapters Concept — how adapters fit into Wave's architecture, subprocess lifecycle, and credential handling
  • Adapters Reference — complete field reference for Claude, OpenCode, GitHub, and browser adapters
  • Custom Adapter Example — wrapping an external CLI via manifest configuration (no Go code required)
