devaipod

Sandboxed AI coding agents in reproducible dev environments using podman pods

Run AI agents with confidence: your code lives in a devcontainer, while the agent runs in a separate container with only limited access to the host system and limited network credentials (e.g. a GitHub token).

It combines podman pods, devcontainers, opencode, and service-gator in an opinionated way.

On the topic of AI

This tool is primarily authored by @cgwalters who would "un-invent" large language models if he could because he believes the long term negatives for society as a whole are likely to outweigh the gains. But since that's not possible, this project is about maximizing the positive aspects of LLMs with a focus on software production (but not exclusively). We need to use LLMs safely and responsibly, with efficient human-in-the-loop controls and auditability.

If you want to use LLMs but are concerned about threats such as prompt injection attacks from un-sandboxed agents, particularly ones with unrestricted access to your machine's secrets (e.g. a GitHub token), then devaipod can help you.

To be clear, this project is itself extensively built with AI (mostly Claude Opus), but the author reviews the output (to varying degrees) - it's not "vibe coded". The emphasis of this project is more on making it easier to use AI in a sandboxed way, but of course there's a spectrum here, and nothing stops one from using it for closer-to-vibe-coding cases.

How It Works

devaipod implements a subset of the devcontainer specification, and launches multiple containers in a single pod when a task is created. At the current time, each task must have at least one git repository.

  1. devaipod launch <git repository> <task> is started (via web UI, TUI or CLI)
  2. Creates a workspace volume and clones that repository into it
  3. Creates a podman pod with multiple components (unsandboxed workspace, sandboxed agent, API pod)

Each devcontainer pod is isolated from the others and from the host by default. Pods only have what you explicitly provide via environment variables, bind mounts, etc. At the current time networking is unrestricted by default, but we aim to support restricting it further.

Requirements

  • podman (rootless works, including inside toolbox containers)
  • A devcontainer image with opencode and git installed (e.g., devenv-debian)

License

Apache-2.0 OR MIT

Quick Start

Before you start

devaipod is opinionated, but also is designed to be very configurable about the execution environment.

devcontainer required

A core assumption of this project is that building your software needs your tools: your versions of npm/Rust/Go etc., on your preferred base OS.

The default solution from this project is devcontainers. In particular, you must have a container image with opencode and git installed alongside your tools.

OpenCode configuration strongly encouraged

While OpenCode does run out of the box with a $0 "Zen" model, a foundational assumption of this project is that in general, you will want to configure at least the provider to use your organization's model(s).

Further, the author of this project is very strongly of the opinion that everyone should write an AGENTS.md that defines their style and rules - don't just accept stock model output!

The encouraged solution to both of these is to create a "dotfiles" git repository. This is not a new concept; it's already supported by popular devcontainer tools, and this project supports it too.

Example dotfiles with opencode config

  • cgwalters, specifically look at https://github.com/cgwalters/homegit/tree/main/dotfiles/.config/opencode

Installation

devaipod is distributed as a prebuilt container image at ghcr.io/cgwalters/devaipod:latest. All you need on the host is Podman (rootless is fine).

Create podman secrets

devaipod passes credentials to agent containers via podman secrets. Create at least your LLM API key:

echo "$ANTHROPIC_API_KEY" | podman secret create anthropic_api_key -
# Optional: GitHub token for service-gator
echo "$GH_TOKEN" | podman secret create gh_token -

macOS note: On macOS with podman machine, verify secrets are visible inside the VM with podman secret list. If you switched machines or secrets aren't showing up, you may need to recreate them.

GHCR note: If you get a 403 pulling ghcr.io/cgwalters/service-gator, you may need to authenticate: podman login ghcr.io

Create a configuration file

Create ~/.config/devaipod.toml referencing your secrets (see Configuration for full options):

[trusted]
secrets = [
  "ANTHROPIC_API_KEY=anthropic_api_key",
  # "GH_TOKEN=gh_token",
]

Start the devaipod daemon

The devaipod container runs as a long-lived daemon. It needs access to the host's podman socket so it can create sibling containers (workspace pods) on the host.

On Linux (rootless podman):

SOCKET=$XDG_RUNTIME_DIR/podman/podman.sock
podman volume create devaipod-state
podman run -d --name devaipod --privileged --replace \
  -p 8080:8080 \
  --add-host=host.containers.internal:host-gateway \
  -v $SOCKET:/run/docker.sock -e DEVAIPOD_HOST_SOCKET=$SOCKET \
  -v devaipod-state:/var/lib/devaipod \
  -v ~/.config/devaipod.toml:/root/.config/devaipod.toml:ro \
  ghcr.io/cgwalters/devaipod:latest

On macOS (podman machine):

On macOS, podman runs inside a Linux VM. The volume source for the socket must be the path inside the VM, not the Mac-side path. For a rootful machine that is /run/podman/podman.sock.

podman volume create devaipod-state
podman run -d --name devaipod --privileged --replace \
  -p 8080:8080 \
  -v /run/podman/podman.sock:/run/docker.sock \
  -e DEVAIPOD_HOST_SOCKET=/run/podman/podman.sock \
  -v devaipod-state:/var/lib/devaipod \
  -v ~/.config/devaipod.toml:/root/.config/devaipod.toml:ro \
  ghcr.io/cgwalters/devaipod:latest

Once started, open the web UI at http://127.0.0.1:8080/ -- this is the primary way to interact with devaipod. You can create workspaces, kick off tasks, and monitor agent progress from the browser.

Why --privileged and DEVAIPOD_HOST_SOCKET?

--privileged is required for access to the mounted podman socket and for spawning workspace containers.

DEVAIPOD_HOST_SOCKET tells devaipod the host-side path of the socket. When devaipod creates sibling containers, bind mount sources are resolved by the host's container daemon, not inside the devaipod container. On rootless Linux the host path (e.g. /run/user/1000/podman/podman.sock) differs from the container-internal /run/docker.sock.

We do not use --network host. Instead, the --add-host flag (Linux only; unnecessary on macOS) lets devaipod reach pod-published ports via host.containers.internal. Override with DEVAIPOD_HOST_GATEWAY if needed.

State volume

The devaipod-state volume persists the web UI auth token across container restarts. Token lookup order: (1) podman secret /run/secrets/devaipod-web-token if provided, (2) /var/lib/devaipod/web-token from the state volume. If no token exists, one is generated on first start.
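
The lookup order above can be sketched as a small shell function. This is a hypothetical illustration (the helper name and the directory arguments are invented for the sketch; the real paths are /run/secrets and /var/lib/devaipod):

```shell
#!/usr/bin/env bash
# find_web_token: hypothetical sketch of the documented token lookup order.
# $1 = secrets dir (normally /run/secrets), $2 = state dir (normally /var/lib/devaipod)
find_web_token() {
  local secrets_dir=$1 state_dir=$2
  if [ -f "$secrets_dir/devaipod-web-token" ]; then
    # (1) a provided podman secret wins
    cat "$secrets_dir/devaipod-web-token"
  elif [ -f "$state_dir/web-token" ]; then
    # (2) otherwise use the token persisted in the state volume
    cat "$state_dir/web-token"
  else
    # (3) no token yet: generate one and persist it for later restarts
    head -c 16 /dev/urandom | od -An -tx1 | tr -d ' \n' | tee "$state_dir/web-token"
  fi
}
```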

Running tasks

The web UI at http://127.0.0.1:8080/ is the primary interface for creating workspaces, launching tasks, and monitoring agent progress.

For CLI usage, all commands are executed inside the daemon container via podman exec:

# Launch a task (service-gator auto-configured for GitHub URLs):
podman exec devaipod devaipod run https://github.com/org/repo -c 'fix typos in README.md'

# From an issue URL (default task is "Fix <issue_url>"):
podman exec devaipod devaipod run https://github.com/org/repo/issues/123

# Start a workspace with idle agents for manual interaction:
podman exec -ti devaipod devaipod up https://github.com/org/repo

# List workspaces:
podman exec devaipod devaipod list

A TUI is also available for terminal-based monitoring:

# Attach to the agent:
podman exec -ti devaipod devaipod attach <workspace>

# Attach to the worker (requires orchestration enabled):
podman exec -ti devaipod devaipod attach <workspace> --worker

# Get a shell in the workspace container:
podman exec -ti devaipod devaipod exec <workspace> -W

Service-gator: GitHub Access for the Agent

service-gator provides scope-controlled GitHub access (read PRs/issues, create drafts, etc.) to the AI agent without exposing your GH_TOKEN directly.

Automatic for GitHub URLs: When you run devaipod run https://github.com/... or devaipod run https://github.com/.../pull/123, service-gator is auto-enabled with read + draft PR permissions for that repository.

Recommended: Global read-only config. Create a podman secret for your GitHub token (echo 'ghp_...' | podman secret create gh_token -), then add to ~/.config/devaipod.toml:

[trusted]
secrets = ["GH_TOKEN=gh_token"]

[service-gator.gh]
read = true

This gives all pods read-only access to all GitHub data (repos, search, gists, GraphQL). See Service-gator Integration for write permissions and advanced configuration.

Editor integration via SSH

Each devaipod workspace runs an embedded SSH server, allowing you to connect with editors that support SSH remoting (Zed, VSCode, Cursor, etc.). This lets you interrupt an autonomous task and take manual control of the codebase.

To export SSH configs from the container to the host, bind-mount a directory to /run/devaipod-ssh when starting the daemon:

mkdir -p ~/.ssh/config.d/devaipod

# Add to the top of ~/.ssh/config:
# Include config.d/devaipod/*

Then add -v ~/.ssh/config.d/devaipod:/run/devaipod-ssh:Z to your podman run command. When this mount exists, devaipod automatically writes SSH configs there. You can then connect:

# Zed:
zed ssh://devaipod-<workspace>
# VSCode:
code --remote ssh-remote+devaipod-<workspace> /workspaces/<project>

The SSH connection goes to the workspace container, which has full access to credentials for manual development work.

Stopping and cleanup

The daemon runs sleep infinity by default. To stop it:

podman stop devaipod
podman rm devaipod

Workspace pods persist independently and continue running even if the devaipod container is stopped.

Architecture

┌────────────────────────────────────────────────────────────┐
│ Host                                                       │
│  ┌─────────────────────┐                                   │
│  │ podman.sock         │◄──────────────────┐               │
│  └─────────────────────┘                   │               │
│                                            │               │
│  ┌─────────────────────┐     ┌─────────────┴─────────────┐ │
│  │ devaipod container  │     │ Workspace pod             │ │
│  │ (daemon)            │────►│ - {pod}-workspace         │ │
│  │                     │     │ - {pod}-agent             │ │
│  └─────────────────────┘     │ - {pod}-api (web UI,      │ │
│           ▲                  │     proxy, git/PTY)       │ │
│           │                  │ - {pod}-gator (optional)  │ │
│  Web UI :8080 / podman exec  │ - {pod}-worker (opt-in)   │ │
│  (primary: browser,          └───────────────────────────┘ │
│   also CLI/TUI)                                            │
└────────────────────────────────────────────────────────────┘

Users interact through the control plane web UI at :8080, which is authenticated by default (a login token is generated on first start and printed to the container logs). The control plane manages pod lifecycle and embeds each pod's agent UI in an iframe. The pod-api sidecar is the only published port per pod (8090 internal, random host port); it serves the vendored opencode SPA, proxies to the opencode agent (port 4096, not published externally), and provides git/PTY endpoints. The opencode server itself requires Basic Auth with a per-pod password that the pod-api sidecar handles transparently.

The devaipod container uses podman-remote to communicate with the host's podman daemon via the mounted socket. This allows it to create "sibling" containers (workspace pods) that run alongside it on the host.

Limitations

  • Remote URLs only - only works with remote repository URLs, not local directories
  • No bind_home - the [bind_home] config option is not supported; use [trusted.secrets] instead

Building from source

To build the container image locally:

podman build -t ghcr.io/cgwalters/devaipod -f Containerfile .

The multi-stage Containerfile builds devaipod from source using CentOS Stream 10 and creates a minimal runtime image with podman-remote, git, tmux, and openssh-clients.

Next Steps

Configuration

devaipod is configured via ~/.config/devaipod.toml and per-project devcontainer.json files.

Global Configuration

Create ~/.config/devaipod.toml:

# Dotfiles repository - its devcontainer.json is used as a fallback
# when a project has no devcontainer.json of its own
[dotfiles]
url = "https://github.com/you/homegit"

# Global environment variables for all containers
[env]
# Forward these from host environment (if they exist)
allowlist = ["GOOGLE_CLOUD_PROJECT", "SSH_AUTH_SOCK", "VERTEX_LOCATION"]

# Set these explicitly
[env.vars]
VERTEX_LOCATION = "global"
EDITOR = "vim"

# Trusted environment variables (workspace + gator only, NOT agent)
[trusted.env]
allowlist = ["GH_TOKEN", "GITLAB_TOKEN", "JIRA_API_TOKEN"]

# Or use podman secrets (recommended)
[trusted]
secrets = ["GH_TOKEN=gh_token", "GITLAB_TOKEN=gitlab_token"]

# File-based secrets (mounted as files, env var points to path)
# Useful for credentials like gcloud ADC that expect a file path
file_secrets = ["GOOGLE_APPLICATION_CREDENTIALS=google_adc"]

# GPU passthrough (optional)
[gpu]
enabled = true  # or "auto" to detect
target = "all"  # or "workspace", "agent"

# Service-gator default configuration (optional)
[service-gator]
enabled = true
port = 8765

[service-gator.gh.repos]
"myorg/*" = { read = true }
"myorg/main-project" = { read = true, create-draft = true }
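
The allowlist semantics can be illustrated with a small bash sketch (hypothetical, not devaipod's actual implementation): a variable is forwarded only when it is actually set in the host environment.

```shell
#!/usr/bin/env bash
# Hypothetical sketch of [env] allowlist forwarding: only variables that
# are set on the host cross into the container environment.
build_env_args() {
  local var args=()
  for var in "$@"; do
    if [ -n "${!var:-}" ]; then
      args+=("--env" "$var=${!var}")
    fi
  done
  printf '%s\n' "${args[@]}"
}
```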

Using Without devcontainer.json

Not all repositories include a devcontainer.json. The recommended approach is to put a default devcontainer.json in your dotfiles repository. When a project has no devcontainer.json, devaipod automatically checks your dotfiles repo for one.

Dotfiles devcontainer.json (recommended)

Add a .devcontainer/devcontainer.json to your dotfiles repo (configured via [dotfiles] in devaipod.toml). This is the natural place for user-level defaults like your preferred image, nested container support, and lifecycle commands:

{
  "image": "ghcr.io/bootc-dev/devenv-debian",
  "customizations": {
    "devaipod": { "nestedContainers": true }
  },
  "runArgs": ["--privileged"],
  "postCreateCommand": {
    "devenv-init": "sudo /usr/local/bin/devenv-init.sh"
  }
}

The runArgs with --privileged keeps compatibility with the stock devcontainer CLI, while nestedContainers: true tells devaipod to use a tighter set of capabilities instead.

To force the dotfiles devcontainer.json even when a project has its own, use --use-default-devcontainer (or the checkbox in the web UI).

The resolution order is:

  1. --devcontainer-json inline override
  2. Project's devcontainer.json (skipped with --use-default-devcontainer)
  3. Dotfiles repo's devcontainer.json
  4. --image flag with default settings
  5. default-image from config with default settings

Other options

You can also specify --image per-invocation or set default-image in the config, but these only set the image without any lifecycle commands or customizations.

Git Hosting Providers

devaipod recognizes bare hostnames like github.com/owner/repo and automatically prepends https://. The built-in list covers GitHub, GitLab, Codeberg, Bitbucket, sr.ht, and Gitea. For private instances, add them via the [git] section:

[git]
extra_hosts = ["forgejo.example.com", "gitea.corp.internal"]

This lets you run devaipod up forgejo.example.com/team/project without typing the full URL. SSH URLs (git@host:owner/repo.git) are also automatically converted to HTTPS regardless of this setting.
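
The normalization described above can be sketched in bash (hypothetical function name; the real implementation is in Rust):

```shell
#!/usr/bin/env bash
# normalize_repo_url: hypothetical sketch of the URL handling described above.
# - SSH URLs (git@host:owner/repo.git) become https://host/owner/repo
# - bare hostnames (github.com/owner/repo) get https:// prepended
normalize_repo_url() {
  local url=$1
  if [[ $url == git@*:* ]]; then
    local rest=${url#git@}
    local host=${rest%%:*}
    local path=${rest#*:}
    path=${path%.git}
    echo "https://$host/$path"
  elif [[ $url != *"://"* ]]; then
    echo "https://$url"
  else
    echo "$url"
  fi
}
```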

Per-Project Configuration

Projects use standard devcontainer.json with optional devaipod customizations:

{
  "name": "my-project",
  "image": "ghcr.io/bootc-dev/devenv-debian:latest",
  "customizations": {
    "devaipod": {
      "envAllowlist": ["MY_API_KEY", "CUSTOM_TOKEN"]
    }
  }
}

Secrets in devcontainer.json

Declare secrets that should be injected from podman:

{
  "secrets": {
    "GEMINI_API_KEY": {
      "description": "API key for Google Gemini"
    },
    "ANTHROPIC_API_KEY": {
      "description": "API key for Claude"
    }
  }
}

Then create matching podman secrets:

echo "your-api-key" | podman secret create GEMINI_API_KEY -

Environment Variable Priority

Environment variables are merged in this order (later wins):

  1. Global [env] section in devaipod.toml
  2. Per-project containerEnv in devcontainer.json
  3. Per-project customizations.devaipod.envAllowlist
  4. Command-line --env flags
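
The merge order can be illustrated with bash associative arrays. This is a sketch of the documented precedence only, not the actual merge code:

```shell
#!/usr/bin/env bash
# Sketch of the documented merge order: each later layer overwrites earlier ones.
declare -A merged

# 1. Global [env] section in devaipod.toml
merged[EDITOR]=vim
merged[VERTEX_LOCATION]=global

# 2. Per-project containerEnv in devcontainer.json
merged[EDITOR]=nano

# 3. Per-project envAllowlist (values forwarded from the host, if set)
# 4. Command-line --env flags (highest priority)
merged[EDITOR]=emacs

echo "${merged[EDITOR]}"   # emacs: the --env flag wins
```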

Service-gator CLI Flags

Override configuration with CLI flags:

# Read-only access to all GitHub repos
devaipod up https://github.com/org/repo --service-gator=github:readonly-all

# Read + draft PR access to specific repo
devaipod up https://github.com/org/repo --service-gator=github:myorg/myrepo:read,create-draft

# Custom image
devaipod up https://github.com/org/repo --service-gator=github:myorg/myrepo --service-gator-image localhost/service-gator:dev

See Service-gator Integration for full details.

Multi-Agent Orchestration

By default each workspace runs a single agent container. Multi-agent orchestration — where a worker container runs alongside the agent and receives delegated subtasks — is opt-in:

[orchestration]
enabled = true           # Create a worker container (default: false)
worker_timeout = "30m"   # Timeout for worker subtasks

[orchestration.worker]
# How the worker accesses service-gator
# Options: "readonly" (default), "inherit", "none"
gator = "readonly"

When enabled, the agent delegates subtasks to the worker and reviews its commits before merging.

Worker gator options:

  • "readonly": Worker can only read from forge (no PRs, no pushes) — default
  • "inherit": Worker gets same gator scopes as the agent
  • "none": Worker has no gator access

The worker is one step further from human review, so it has restricted access by default.

Design Philosophy

Mid-Level Infrastructure

devaipod is designed as mid-level infrastructure for AI coding workflows.

More opinionated than raw tools: Unlike running opencode or Claude Code directly, devaipod provides structure around sandboxing, credential isolation, and workspace lifecycle. You don't have to figure out container security yourself.

Less opinionated than full platforms: Unlike monolithic solutions (OpenHands Cloud, Cursor), devaipod focuses on the primitives and leaves room for building different workflows on top. The included web UI is the primary interface, but the pod abstraction supports any frontend — TUI, custom dashboards, or API-driven automation.

Composable building blocks: The pod abstraction and service-gator MCP are independent pieces. Use what you need, skip what you don't.

This design enables:

  • Custom control planes (the included web UI, TUI, or API-driven)
  • Integration with existing CI/CD and review workflows
  • Different human-in-the-loop patterns for different teams
  • Extension via MCP servers and external tooling

Security First

The fundamental design principle is that AI agents should have minimal access to credentials and external services. Rather than trusting the agent with your GitHub token, devaipod:

  1. Runs the agent in an isolated container without trusted credentials (GH_TOKEN, etc.)
  2. Routes external service access through service-gator, which enforces fine-grained scopes
  3. By default, only allows the agent to read repositories and create draft pull requests

This means a prompt injection attack or misbehaving agent cannot:

  • Push directly to your repositories
  • Access other repositories you have access to
  • Merge pull requests
  • Create non-draft PRs (which could trigger CI in surprising ways)

Human-in-the-Loop

devaipod is built for workflows where humans review AI-generated code before it becomes permanent. The default permissions (read + draft PR) reflect this: the agent can propose changes, but a human must mark them ready for merge.

This isn't about distrusting AI capabilities—it's about maintaining auditability and preventing automation failures from having outsized impact.

Web UI Architecture

The web UI is a vendored build of the opencode SPA, built from source in the Containerfile with VITE_DEVAIPOD=true. It is served by a pod-api sidecar container that runs alongside each agent pod.

The control plane handles pod lifecycle (create, start, stop, rebuild), authentication (cookie-based login), discovering each pod-api sidecar's published port, and serving the iframe wrapper with navigation. The /pods management page is an SPA route outside the opencode SDK provider stack.

Each pod's sidecar handles everything else: serving the SPA, proxying opencode API calls to localhost:4096 within the pod's network namespace, and providing git and PTY endpoints directly.

Browser → control plane:8080
  ├─ /pods                    Pod management page (SPA route)
  ├─ /_devaipod/agent/{name}/ Iframe wrapper (discovers pod-api port)
  └─ /api/devaipod/...        Pod lifecycle, agent status, proposals

Browser → pod-api:{port}      (via iframe, each pod has its own port/origin)
  ├─ /                        Vendored opencode SPA (index.html)
  ├─ /assets/*                SPA static files (JS, CSS, fonts)
  ├─ /git/*                   Git endpoints (direct process, no exec overhead)
  ├─ /pty/*                   Workspace PTY (WebSocket, bollard exec)
  ├─ /git/events              SSE stream (inotify-based git watcher)
  └─ /*                       Fallback: proxy to opencode at localhost:4096
                              (session, rpc, event, config, etc.)
                              with Basic auth, SSE keepalive for readiness

Each pod exposes only one published port (the pod-api sidecar at 8090 internal, random host port). The opencode server port (4096) is NOT published — the sidecar proxies to it internally. Since each pod runs on its own origin (different host port), localStorage is naturally isolated per pod.

Why we vendor the opencode UI

opencode serve does not serve its own web UI — non-API requests are proxied to https://app.opencode.ai. This is unsuitable for devaipod because cross-origin iframes are blocked by X-Frame-Options/CSP headers, the hosted UI would make API calls back to app.opencode.ai instead of the local backend, and air-gapped environments can't reach external services.

Vendoring the built SPA eliminates all three problems. The opencode SPA detects it's not on opencode.ai and uses window.location.origin for API calls, which on the pod-api sidecar routes to the correct opencode server automatically.

Pod Architecture

Each devaipod workspace is a podman pod containing several containers:

| Container | Role |
|-----------|------|
| workspace | User's dev environment (from devcontainer image) |
| agent | Runs the AI agent (opencode); has its own workspace copy |
| gator | service-gator — fine-grained MCP server for GitHub/GitLab/Forgejo |
| api | pod-api sidecar — HTTP server for git status, summary, completion status |

All containers in a pod share the network namespace (localhost communication). The api container has a /healthz endpoint and a podman healthcheck configured.

Key source files

| File | Purpose |
|------|---------|
| src/main.rs | CLI entry point and all subcommand handlers |
| src/pod.rs | Pod creation, container configs, volume management |
| src/pod_api.rs | Pod-api sidecar HTTP server (axum) |
| src/podman.rs | Podman API abstraction, ContainerConfig, PodmanService |
| src/web.rs | Web UI server, proxy routes, auth |
| src/config.rs | Configuration types and loading |
| src/ssh_server.rs | SSH server for exec --stdio connections |

Volumes

Each pod creates up to 5 named volumes, prefixed with the pod name: workspace, agent-home, agent-workspace, worker-home, worker-workspace (e.g. {pod}-workspace). The worker volumes are only created when orchestration mode is enabled.

Known issue: cmd_prune and prune_done_pods do not clean up volumes when removing pods. cmd_delete handles this correctly. This is a bug to fix.

Tracing

All log output goes to stderr (via tracing_subscriber with .with_writer(std::io::stderr)). This is important because some commands (e.g. exec --stdio, gator show --json) use stdout for structured data.
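
A minimal illustration of why this matters (a generic shell sketch, not devaipod code): a caller capturing stdout gets clean structured data even while log lines are being emitted.

```shell
#!/usr/bin/env bash
# Structured data on stdout, logs on stderr: redirecting stderr does not
# disturb the structured output a caller captures.
emit() {
  echo "starting up" >&2        # log line -> stderr
  echo '{"status":"done"}'      # structured output -> stdout
}
json=$(emit 2>/dev/null)
echo "$json"
```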

Sandboxing Model

Overview

devaipod isolates AI agents using podman pods with multiple containers. Containerization provides isolation by default (configurable via your devcontainer). An additional key security property is credential isolation: the agent container does not receive trusted credentials (GH_TOKEN, etc.), only LLM API keys. Service-gator (optional) controls access to remote services like JIRA, GitLab, GitHub, etc.

For implementation details, see the Rust module docs in src/pod.rs.

Defense in Depth

  1. Container isolation — The agent runs in a separate container from the workspace container.

  2. Credential isolation — The agent does NOT receive trusted credentials like GH_TOKEN, GITLAB_TOKEN, or JIRA_API_TOKEN. It only receives LLM API keys (ANTHROPIC_API_KEY, etc.). This is the primary security boundary. (The worker, when enabled, has the same restriction.)

  3. Isolated home directory — Each agent's $HOME is a separate volume that doesn't contain user credentials from the host.

  4. Authenticated endpoints — The control plane web UI (:8080) requires a login token for all API routes. The opencode server inside each pod requires Basic Auth with a randomly generated per-pod password. The pod-api sidecar acts as an authenticating proxy to opencode, so external clients never need to know the password directly. Unlike stock opencode serve, endpoints are not open by default.

Architecture

flowchart TB
    subgraph host[Host - rootless podman]
        subgraph pod[Podman Pod - shared network]
            workspace[Workspace Container<br/>Full dev env, GH_TOKEN]
            agent[Agent Container<br/>LLM keys only, isolated HOME]
            api[API Sidecar<br/>Web UI + proxy, port 8090]
            volume[(Shared Volume)]
            gator[Gator Container<br/>service-gator MCP]:::optional
            worker[Worker Container<br/>LLM keys only]:::optional
        end
    end
    workspace <-->|mount| volume
    agent <-->|mount| volume
    api -->|proxy :4096| agent
    agent -->|MCP :8765| gator
    agent -->|delegate :4098| worker
    worker <-->|mount| volume
    worker -.->|MCP| gator
    gator -->|scoped| github[GitHub API]
    classDef optional stroke-dasharray: 5 5

Gator is enabled when service-gator scopes are configured. Worker is enabled via [orchestration] enabled = true; the worker's access to gator is configurable via [orchestration.worker] gator (default: readonly).

Container Security

Workspace Container

  • Runs your devcontainer image with full privileges
  • Has access to your dotfiles, credentials, and environment (GH_TOKEN, GITLAB_TOKEN, etc.)
  • Can run privileged operations (build, test, deploy)
  • Functions as a full development environment for human use
  • Contains opencode-connect shim that connects to the agent

Agent Container ({pod}-agent)

  • Same devcontainer image with the same Linux capabilities as workspace (to support nested containers)
  • Runs opencode serve on port 4096
  • Credential isolation: Receives only LLM API keys (ANTHROPIC_API_KEY, OPENAI_API_KEY, etc.)
  • Does NOT receive trusted credentials (GH_TOKEN, etc.) — accesses external services only via service-gator when available
  • Isolated home directory (separate volume)
  • Has read/write access to its own workspace clone ({pod}-agent-workspace volume)
  • Read-only access to the main workspace via /mnt/main-workspace

API Sidecar ({pod}-api)

  • Serves the vendored opencode SPA (embedded by the control plane via iframe)
  • Proxies to the agent's opencode at localhost:4096
  • Provides git and PTY endpoints
  • Only published port per pod (8090 internal); the control plane at :8080 is the primary user-facing entry point

Gator Container (optional)

  • Enabled when service-gator scopes are configured
  • Runs service-gator MCP server
  • Receives trusted credentials (GH_TOKEN, JIRA_API_TOKEN)
  • Provides scope-restricted access to external services
  • Agent (and worker, if present) connect via MCP protocol, never see raw credentials

Worker Container (optional, [orchestration] enabled = true)

  • Same devcontainer image with the same Linux capabilities
  • Runs opencode serve on port 4098
  • Executes subtasks delegated by the agent (which becomes "task owner" when orchestration is enabled)
  • Credential isolation: Receives only LLM API keys — accesses external services only via service-gator
  • Isolated home directory (separate from agent)
  • Has its own workspace clone for isolated git operations

Volume Strategy

Workspace code is cloned into a podman volume (not bind-mounted from host):

  • Volume name: {pod_name}-workspace
  • Benefits: Avoids UID mapping issues with rootless podman
  • Access: Workspace and agent containers mount this volume (worker also mounts it when orchestration is enabled)

Environment Variable Isolation

Environment variables are carefully partitioned:

| Variable Type | Workspace | Agent | API | Worker (opt-in) | Gator (opt-in) |
|---------------|-----------|-------|-----|-----------------|----------------|
| LLM API keys (ANTHROPIC_API_KEY, etc.) | ✓ | ✓ | | ✓ | |
| Trusted env (GH_TOKEN, etc.) | ✓ | | | | ✓ |
| Global env allowlist | ✓ | ✓ | | ✓ | |
| Project env allowlist | ✓ | ✓ | | ✓ | |

The workspace container has full access to trusted credentials, making it suitable for human development work. The agent (and worker, when enabled) are credential-isolated and must use service-gator for any external service access.

Configure trusted environment variables in ~/.config/devaipod.toml:

[trusted.env]
allowlist = ["GH_TOKEN", "GITLAB_TOKEN", "JIRA_API_TOKEN"]

Known Limitations

  1. Workspace file access: The agent (and worker, if enabled) can read/write any file in their respective workspaces. Secrets in .env files are visible.

  2. Network access: All containers have full network access within the pod's shared network namespace.

  3. Same image requirement: The agent (and worker) containers use the same image as the workspace. OpenCode must be installed in your devcontainer image.

External Service Access

For operations requiring access to external services (GitHub, JIRA, etc.), agents use the integrated service-gator MCP server which provides scope-based access control.

See Service-gator Integration for full documentation.

Agent Workspace Isolation

Overview

The AI agent operates in an isolated workspace separate from the human's working tree. The agent cannot modify files in the human's workspace directly—changes must be explicitly pulled by the human after review.

By default, a pod has three containers (workspace, agent, and pod-api sidecar) but two git working trees: the human's and the agent's. Orchestration (task owner + worker) is opt-in via [orchestration] enabled = true.

This isolation prevents:

  • Accidentally running AI-generated code before review
  • Prompt injection attacks that could modify your working files
  • Unintentional changes to your development environment

The human always has full control over when and how agent changes are incorporated.

Architecture

Every pod contains containers that share git objects but maintain isolated working trees. The default pod has 3 containers:

┌────────────────────────────────────────────────────────────────┐
│                          devaipod Pod                          │
├──────────────────────┬──────────────────────┬──────────────────┤
│  Workspace           │  Agent ({pod}-agent) │  API ({pod}-api) │
│                      │                      │                  │
│  /workspaces/...     │  /workspaces/...     │  Web UI + proxy  │
│  (human's tree)      │  (agent's tree)      │  port 8090       │
│                      │                      │                  │
│  /mnt/main-workspace │  /mnt/main-workspace │                  │
│  (for git alternates)│  (readonly)          │                  │
│                      │                      │                  │
│  /mnt/agent-workspace│                      │                  │
│  (readonly)          │                      │                  │
└──────────────────────┴──────────────────────┴──────────────────┘

Optional containers:

  • Gator — enabled when service-gator scopes are configured. Provides scoped access to external services (GitHub, JIRA, etc.) via MCP.
  • Worker — enabled via [orchestration] enabled = true. When present, the agent becomes the "task owner" and delegates subtasks to the worker.

Volume mounts (default)

| Container | Path | Source | Access |
|-----------|------|--------|--------|
| Workspace | /workspaces | main workspace volume | read-write |
| Workspace | /mnt/main-workspace | main workspace volume | read-only |
| Workspace | /mnt/agent-workspace | agent workspace volume | read-only |
| Agent | /workspaces | agent workspace volume | read-write |
| Agent | /mnt/main-workspace | main workspace volume | read-only |

The cross-mounts are read-only, so neither container can modify the other's working tree.

Note: The workspace container mounts the main volume at both /workspaces (read-write) and /mnt/main-workspace (read-only). This allows git fetch agent to work correctly—the agent's clone uses --shared which creates an alternates file referencing /mnt/main-workspace, and this path must exist in both containers.
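
Why the path must match can be reproduced with plain git outside any container (the paths below are illustrative temp directories, not the real mounts): a --shared clone records an absolute path in .git/objects/info/alternates, and its history becomes unreadable if that path is not visible.

```shell
set -eu
tmp=$(mktemp -d)
# Stand-in for the main workspace (illustrative path, not the real mount)
git init -q -b main "$tmp/main"
git -C "$tmp/main" -c user.email=dev@example.com -c user.name=dev \
    commit -q --allow-empty -m "initial"
# Shared clone, analogous to the agent's workspace
git clone -q --shared "$tmp/main" "$tmp/agent"
# The alternates file holds an absolute path into the source repo's objects
cat "$tmp/agent/.git/objects/info/alternates"
# If that path is no longer visible, the clone cannot read its history
mv "$tmp/main" "$tmp/main.gone"
git -C "$tmp/agent" log >/dev/null 2>&1 || echo "history unreadable without alternates path"
```

In the pod, mounting the main volume at the same /mnt/main-workspace path in both containers keeps that recorded alternates path valid everywhere.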

Orchestration mounts (when enabled): The worker gets its own workspace clone at /workspaces and the agent (task owner) additionally mounts /mnt/worker-workspace (read-only) for reviewing worker commits.

Git object sharing

To avoid duplicating repository data, the agent's workspace is cloned using git clone --shared. This creates a .git/objects/info/alternates file that references the main workspace's git objects.

Benefits:

  • Near-instant clone time (no network fetch needed)
  • Minimal disk space overhead (objects shared, not copied)
  • Full git functionality (the agent can commit, branch, etc.)

The agent's clone shares objects from /mnt/main-workspace, which contains the human's repository.
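
A quick local simulation (temp paths, illustrative only) shows the space saving: the shared clone starts with an empty object store, reading all history through the alternates reference, and only writes objects for its own new commits.

```shell
set -eu
tmp=$(mktemp -d)
git init -q -b main "$tmp/main"
git -C "$tmp/main" -c user.email=dev@example.com -c user.name=dev \
    commit -q --allow-empty -m "initial"
git clone -q --shared "$tmp/main" "$tmp/agent"
# Count object files in the clone, excluding the objects/info metadata
count() { find "$tmp/agent/.git/objects" -type f -not -path '*/info/*' | wc -l | tr -d ' '; }
before=$(count)   # nothing copied: history comes via the alternates reference
git -C "$tmp/agent" -c user.email=ai@example.com -c user.name=agent \
    commit -q --allow-empty -m "agent work"
after=$(count)    # the agent's own commits live in its own object store
echo "objects before=$before after=$after"
```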

Commands

Connect to the agent (default):

devaipod attach <name>

Connect to workspace container for manual work:

devaipod attach <name> -W

Create a pod and auto-start the agent on a task:

devaipod run <repo> "fix the bug in auth.rs"

Get a shell in the agent container:

devaipod exec <name>

Get a shell in the workspace container:

devaipod exec <name> -W

Connect to the worker agent (requires [orchestration] enabled = true):

devaipod attach <name> --worker

Get a shell in the worker container (requires orchestration):

devaipod exec <name> --worker

Git remotes

devaipod sets up consistent git remote names across all containers.

Source repository remotes

| Remote | Description |
|--------|-------------|
| origin | The main upstream repository (where PRs merge to, the source of truth) |
| fork | The user's fork of the upstream repository (auto-detected via the GitHub API when a GH_TOKEN is available, or set from the PR author's fork when working on a PR from a fork) |

Cross-container collaboration remotes (default)

| Container | Remote | Points to |
|-----------|--------|-----------|
| Workspace | agent | Agent's workspace |
| Agent | workspace | Human's workspace |

These remotes are set up automatically when the pod starts—no manual configuration needed.

Orchestration remotes (when enabled)

When orchestration is active, additional remotes are configured:

| Container | Remote | Points to |
|-----------|--------|-----------|
| Agent (task owner) | worker | Worker's workspace |
| Worker | owner | Task owner's workspace |

The task owner fetches from the worker, reviews commits, and merges them before pushing to origin or creating a PR.
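
The owner-side review loop can be simulated with plain local clones (paths, branch name, and commit messages here are illustrative; in a real pod the worker remote is pre-configured):

```shell
set -eu
tmp=$(mktemp -d)
git init -q -b main "$tmp/owner"
git -C "$tmp/owner" -c user.email=o@example.com -c user.name=owner \
    commit -q --allow-empty -m "task setup"
# The worker works in its own clone and commits a subtask
git clone -q "$tmp/owner" "$tmp/worker"
git -C "$tmp/worker" -c user.email=w@example.com -c user.name=worker \
    commit -q --allow-empty -m "subtask: add tests"
# The task owner fetches, reviews the delta, then merges the worker's commits
git -C "$tmp/owner" remote add worker "$tmp/worker"
git -C "$tmp/owner" fetch -q worker
git -C "$tmp/owner" log --oneline main..worker/main
git -C "$tmp/owner" merge -q --ff-only worker/main
```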

Workflow: Reviewing agent changes

The agent commits changes to its isolated workspace. To incorporate those changes into your working tree, use standard git operations from the workspace container.

First, connect to the workspace container:

devaipod attach <name> -W
# or
devaipod exec <name> -W

The agent remote is already configured. Review and pull changes:

# Fetch agent's commits
git fetch agent

# See what the agent committed
git log agent/HEAD

# Review the diff
git diff HEAD..agent/HEAD

# Apply specific commits
git cherry-pick <commit>

# Or merge all agent changes
git merge agent/HEAD

Workflow: Agent continues from human changes

When the human makes changes and wants the agent to continue from that point:

  1. Human makes commits in the workspace container
  2. Agent fetches from the pre-configured workspace remote:
# In the agent container (or via opencode)
git fetch workspace
git rebase workspace/HEAD
# or
git merge workspace/HEAD

This enables iterative collaboration loops:

  1. Agent works on task, makes commits
  2. Human reviews via git fetch agent, cherry-picks or edits
  3. Agent fetches human's changes via git fetch workspace, continues
  4. Repeat

Security properties

This isolation model provides defense-in-depth:

  1. Write isolation: The agent cannot modify your working tree. Any file changes require explicit git fetch + merge/cherry-pick.

  2. Commit review: You see exactly what the agent changed before incorporating it. Use git diff and git log to review.

  3. Selective adoption: Cherry-pick individual commits or reject changes entirely. You're not forced to accept everything.

  4. Credential isolation: Combined with sandboxing, the agent also lacks access to your GH_TOKEN and other credentials.

Comparison with direct access

Without workspace isolation, the agent would have direct read-write access to your files. This means:

| Scenario | With isolation | Without isolation |
|----------|----------------|-------------------|
| Agent writes buggy code | Review before merge | Code already in your tree |
| Prompt injection attack | Cannot modify your files | Could delete/corrupt files |
| Agent makes unexpected changes | Visible in git diff | May not notice immediately |
| Reverting agent work | Don't merge it | Manual git reset required |

Workspace isolation means you always opt in to agent changes rather than having to opt out of them.

Secret Management

Implementation details: See src/secrets.rs and src/pod.rs

Overview

devaipod carefully partitions environment variables between containers to keep credentials secure. LLM API keys go to the agent, but trusted credentials (like GH_TOKEN) stay in workspace and gator containers only.

For trusted credentials like GH_TOKEN, podman secrets provide better security than environment variables:

  • Secrets don't appear in podman inspect or process listings
  • Uses podman's native type=env feature to set environment variables directly
  • Secrets are managed separately from container config

Setup

  1. Create podman secrets for your credentials:

    echo -n "ghp_xxxxxxxxxxxx" | podman secret create gh_token -
    echo -n "glpat-xxxx" | podman secret create gitlab_token -
    
    # Verify
    podman secret ls
    
  2. Configure ~/.config/devaipod.toml:

    [trusted]
    # Use podman secrets with type=env (secrets become env vars directly)
    # Format: "ENV_VAR_NAME=secret_name"
    secrets = ["GH_TOKEN=gh_token", "GITLAB_TOKEN=gitlab_token"]
    

How It Works

When devaipod starts:

  1. devaipod passes --secret gh_token,type=env,target=GH_TOKEN to podman
  2. Podman reads the secret value and sets GH_TOKEN directly as an environment variable
  3. Tools like gh, glab, etc. can use the credentials normally

This approach keeps secrets out of the container configuration shown by podman inspect and out of host process listings, while still delivering them as environment variables via podman's built-in type=env injection.

File-based Secrets

Some credentials need to be available as files rather than environment variables. First create a podman secret from the file:

podman secret create google_adc ~/.config/gcloud/application_default_credentials.json

Then reference it via file_secrets:

[trusted]
file_secrets = ["GOOGLE_APPLICATION_CREDENTIALS=google_adc"]

This mounts the podman secret as a file at /run/secrets/google_adc and sets GOOGLE_APPLICATION_CREDENTIALS=/run/secrets/google_adc.

LLM API Keys (devcontainer.json)

  1. Declare secrets in devcontainer.json:

    {
      "secrets": {
        "GEMINI_API_KEY": {
          "description": "API key for Google Gemini"
        },
        "ANTHROPIC_API_KEY": {
          "description": "API key for Claude"
        }
      }
    }
    
  2. Create matching podman secrets on your host:

    echo "your-gemini-key" | podman secret create GEMINI_API_KEY -
    echo "sk-ant-xxx" | podman secret create ANTHROPIC_API_KEY -
    
    # Verify
    podman secret ls
    
  3. Run devaipod - secrets are automatically:

    • Read from devcontainer.json secrets field
    • Fetched from podman via podman secret inspect --showsecret
    • Injected into the appropriate containers

Alternative Methods

Vertex AI / gcloud ADC

For Google Cloud Vertex AI, use file_secrets to mount your application default credentials:

podman secret create google_adc ~/.config/gcloud/application_default_credentials.json
[trusted]
file_secrets = ["GOOGLE_APPLICATION_CREDENTIALS=google_adc"]

[env.vars]
GOOGLE_CLOUD_PROJECT = "your-project-id"

Note: devcontainer.json mounts are parsed but not yet wired into container creation. Use file_secrets or [env] for credentials that need to reach containers.

Environment Variables

Pass directly via containerEnv:

{
  "containerEnv": {
    "GEMINI_API_KEY": "${localEnv:GEMINI_API_KEY}"
  }
}

Dotfiles

Configure in your dotfiles repo (e.g., ~/.config/opencode/opencode.json).

What Gets Forwarded to Agent Container

The agent container receives LLM API keys but NOT trusted credentials:

| Variable Type | Workspace | Agent | Gator |
|---------------|-----------|-------|-------|
| ANTHROPIC_API_KEY | ✗ | ✓ | ✗ |
| OPENAI_API_KEY | ✗ | ✓ | ✗ |
| GEMINI_API_KEY | ✗ | ✓ | ✗ |
| GH_TOKEN | ✓ | ✗ | ✓ |
| GITLAB_TOKEN | ✓ | ✗ | ✓ |
| Global env allowlist | ✓ | ✓ | ✓ |

Trusted Environment Variables

Configure which credentials go to workspace and gator (but NOT agent) in ~/.config/devaipod.toml:

[trusted.env]
# These env vars go to workspace and gator containers only
allowlist = ["GH_TOKEN", "GITLAB_TOKEN", "JIRA_API_TOKEN"]

# Or set explicit values
[trusted.env.vars]
GH_TOKEN = "ghp_xxxxxxxxxxxx"

Global Environment Variables

Configure variables that go to ALL containers (including agent):

[env]
# Forward from host environment
allowlist = ["GOOGLE_CLOUD_PROJECT", "SSH_AUTH_SOCK", "VERTEX_LOCATION"]

# Set explicit values
[env.vars]
VERTEX_LOCATION = "global"

GitHub Token

GH_TOKEN is intentionally NOT forwarded to the agent. For GitHub operations, agents should use MCP servers like service-gator which run in a separate container with appropriate scope restrictions.

See Service-gator Integration for details.

Service-gator Integration

Overview

service-gator is an MCP server that provides scope-restricted access to external services (GitHub, JIRA, GitLab) for AI agents. It runs in a separate gator container alongside the workspace and agent containers, providing a security boundary between the sandboxed AI agent and your external credentials.

Architecture

┌──────────────────────────────────────────────────────────────────────────┐
│  Podman Pod                                                               │
│                                                                           │
│  ┌──────────────────┐  ┌──────────────────┐  ┌──────────────────┐        │
│  │ Workspace         │  │ Gator Container  │  │ Pod-api Sidecar  │        │
│  │ • Full dev env    │  │ • service-gator  │  │ • Serves web UI  │        │
│  │ • Has GH_TOKEN    │  │ • Has GH_TOKEN   │  │ • Proxies to     │        │
│  │ • (trusted)       │  │ • Scope-restrict │  │   agent (4096)   │        │
│  └──────────────────┘  └────────┬─────────┘  └──────────────────┘        │
│                                 │ MCP (HTTP)                              │
│  ┌──────────────────────────────┼───────────────────────────────────────┐ │
│  │ Agent Container (restricted) │                                       │ │
│  │ • opencode serve             │                                       │ │
│  │ • NO GH_TOKEN                │                                       │ │
│  │ • Connects to gator via MCP ─┘                                       │ │
│  │ • Same capabilities as workspace (supports nested containers)        │ │
│  └──────────────────────────────────────────────────────────────────────┘ │
└──────────────────────────────────────────────────────────────────────────┘

For most users, the recommended configuration is global read-only access to all GitHub repos. This allows the AI agent to browse code, read PRs/issues, and understand context across repositories while preventing any write operations.

First, create a podman secret for your GitHub token (one-time setup):

echo 'ghp_your_token_here' | podman secret create gh_token -

Then add this to ~/.config/devaipod.toml:

# Use podman secrets to provide GH_TOKEN to service-gator (but NOT to the agent)
[trusted]
secrets = ["GH_TOKEN=gh_token"]

# Enable service-gator with global read-only GitHub access
[service-gator.gh]
read = true

With this configuration, every devaipod up or devaipod run will automatically have service-gator enabled with read access to all GitHub repos. The agent can:

  • Read repository contents, PRs, issues, and comments via gh api repos/OWNER/REPO/...
  • Understand cross-repo dependencies

But the agent cannot:

  • Push code or create branches
  • Create, merge, or close PRs
  • Comment on issues or PRs
  • Modify any repository state

This is the safest default for productive AI-assisted development.

The read = true setting enables:

  • All /repos/OWNER/REPO/... endpoints (any owner/repo)
  • Non-repo endpoints: /search/..., /gists/..., /user/..., /orgs/...
  • GraphQL queries (implicitly enabled)

This is the most permissive read-only configuration. The agent can browse any public/accessible GitHub data but cannot modify anything.

Adding Write Access for Specific Repos

You can layer additional permissions on top of the global readonly:

[service-gator.gh.repos]
# Global read-only baseline
"*/*" = { read = true }

# Allow draft PRs for your main projects
"myorg/frontend" = { read = true, create-draft = true }
"myorg/backend" = { read = true, create-draft = true }

Quick Start (CLI)

For one-off usage or overriding the config, use command-line flags:

# Read-only access to all GitHub repos
devaipod up https://github.com/org/repo --service-gator=github:readonly-all

# Read access to specific repos
devaipod up https://github.com/org/repo --service-gator=github:myorg/myrepo

# Read access to all repos in an org
devaipod up https://github.com/org/repo --service-gator=github:myorg/*

# Write access to a specific repo
devaipod up https://github.com/org/repo --service-gator=github:myorg/myrepo:write

# Multiple scopes
devaipod up https://github.com/org/repo \
  --service-gator=github:myorg/frontend \
  --service-gator=github:myorg/backend:write

Using a Custom service-gator Image

By default, devaipod pulls ghcr.io/cgwalters/service-gator:latest. To use a locally-built or custom image:

# Use a local development build
devaipod up https://github.com/org/repo --service-gator=github:myorg/myrepo --service-gator-image localhost/service-gator:dev

# Use a specific version
devaipod up https://github.com/org/repo --service-gator=github:myorg/myrepo --service-gator-image ghcr.io/cgwalters/service-gator:v0.2.0

This is useful for testing local changes to service-gator or pinning to a specific version.

CLI Scope Format

--service-gator=SERVICE:TARGET[:PERMISSIONS]
  • SERVICE: github (or gh), gitlab (future), jira (future)
  • TARGET: Repository pattern like owner/repo or owner/*, or special keyword like readonly-all
  • PERMISSIONS: Comma-separated list (default: read)
    • read - Read-only access
    • create-draft - Create draft PRs
    • pending-review - Manage pending PR reviews
    • write - Full write access

Configuration File

For persistent configuration, use ~/.config/devaipod.toml. Here's a complete example:

# Podman secrets for credentials - forwarded to workspace and gator containers
# but NOT to the agent container. Format: "ENV_VAR=secret_name"
# Create secrets with: echo 'token' | podman secret create secret_name -
[trusted]
secrets = ["GH_TOKEN=gh_token", "GITLAB_TOKEN=gitlab_token", "JIRA_API_TOKEN=jira_token"]

# RECOMMENDED: Global read-only access to all GitHub data
# Enables: all repos, /search, /gists, /user, GraphQL
[service-gator.gh]
read = true

# Optional: Add write permissions for specific repos
[service-gator.gh.repos]
# Read + create draft PRs for specific repos you actively develop
"myorg/main-project" = { create-draft = true }

# Read + manage pending PR reviews (for AI code review workflows)
"myorg/reviewed-repo" = { pending-review = true }

# Full write access (use sparingly - only for highly trusted workflows)
# "myorg/trusted-repo" = { write = true }

# PR-specific grants (typically set dynamically via CLI)
# [service-gator.gh.prs]
# "myorg/repo#42" = { write = true }

# JIRA project permissions (if you use JIRA)
# [service-gator.jira.projects]
# "MYPROJ" = { read = true, create = true }

Note: The [service-gator] enabled = true setting is optional - service-gator is auto-enabled when any scopes are configured.

Trusted Environment Variables

The [trusted.env] section is critical for service-gator to work:

[trusted.env]
# These env vars are forwarded ONLY to workspace and gator containers
# The AI agent container does NOT receive these - it must go through service-gator
allowlist = ["GH_TOKEN", "GITLAB_TOKEN", "JIRA_API_TOKEN"]

# You can also set explicit values
[trusted.env.vars]
GH_TOKEN = "ghp_xxxxxxxxxxxx"

This ensures credentials are available to service-gator but not directly accessible by the AI agent.

For better security, use podman secrets instead of environment variables. Secrets don't appear in podman inspect or process listings, and podman's type=env feature sets them directly as environment variables.

  1. Create a podman secret:

    echo -n "ghp_xxxxxxxxxxxx" | podman secret create gh_token -
    
  2. Configure ~/.config/devaipod.toml:

    [trusted]
    # Use podman secrets with type=env (secrets become env vars directly)
    # Format: "ENV_VAR_NAME=secret_name"
    secrets = ["GH_TOKEN=gh_token", "GITLAB_TOKEN=gitlab_token"]
    
  3. devaipod passes --secret gh_token,type=env,target=GH_TOKEN to podman. The GH_TOKEN environment variable is set directly from the secret value.

See Secret Management for more details on this approach.

Permission Levels

GitHub

| Permission | Description |
|------------|-------------|
| read | View PRs, issues, code, run status, etc. |
| create-draft | Create draft PRs only (safer for review workflows) |
| pending-review | Create, update, and delete pending PR reviews |
| write | Full access (merge, close, create non-draft PRs, etc.) |

JIRA

| Permission | Description |
|------------|-------------|
| read | View issues, projects, search |
| create | Create new issues |
| write | Full access (update, transition, comment, etc.) |

Pattern Matching

Repository patterns support trailing wildcards:

  • owner/repo - Exact match
  • owner/* - All repos under owner
  • More specific patterns take precedence over wildcards
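
As a hypothetical sketch (not service-gator's actual implementation), the precedence rule behaves like first-match-wins with the most specific pattern listed first; the repo names and scopes below are made up for illustration:

```shell
# Hypothetical scope resolution: first (most specific) match wins.
scope_for() {
  case "$1" in
    myorg/frontend) echo "read,create-draft" ;;  # exact match beats wildcards
    myorg/*)        echo "read" ;;               # all repos under myorg
    */*)            echo "read" ;;               # global read-only baseline
    *)              echo "none" ;;               # not a repo pattern at all
  esac
}
scope_for myorg/frontend   # -> read,create-draft
scope_for myorg/backend    # -> read
scope_for other/repo       # -> read
```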

How It Works

When you run devaipod up:

  1. devaipod parses CLI --service-gator flags and merges with ~/.config/devaipod.toml
  2. If service-gator is enabled, devaipod creates a pod with:
    • workspace container: Full dev environment with trusted env vars (GH_TOKEN, etc.)
    • gator container: Runs service-gator with scopes and trusted env vars
    • pod-api sidecar: Serves the web UI, proxies to the agent, provides git/PTY endpoints
    • agent container: Runs opencode serve with NO trusted env vars, configured to use gator MCP
  3. The agent can use GitHub/JIRA tools via MCP, but only with the configured scopes
  4. Credentials never reach the agent - they stay in the trusted containers

Requirements

  • GH_TOKEN must be available to the gator container — configure via [trusted] secrets (recommended) or [trusted.env] in devaipod.toml
  • For JIRA, JIRA_API_TOKEN should be in [trusted.env]

The service-gator container image (ghcr.io/cgwalters/service-gator) is automatically pulled.

Security Benefits

  1. Credential Isolation: API tokens are in workspace/gator containers only; the agent never sees them
  2. Container Separation: Agent runs in a separate container (same Linux capabilities as workspace to support nested containers, but no trusted credentials)
  3. Fine-grained Scoping: Grant exactly the permissions needed via CLI or config
  4. MCP Protocol: Agent communicates with external services only through the MCP interface

See Also

OpenCode Integration

Overview

OpenCode is an open-source TUI for AI coding agents. devaipod runs OpenCode in a sandboxed agent container within a podman pod.

Installation

OpenCode must be available in your devcontainer image. The ghcr.io/bootc-dev/devenv-debian base image comes with OpenCode pre-installed.

Configuration

OpenCode is configured via ~/.config/opencode/opencode.json. Set this up in your dotfiles:

{
  "$schema": "https://opencode.ai/config.json",
  "model": "google-vertex-anthropic/claude-sonnet-4-20250514"
}

Supported Providers

| Provider | Model Example | Env Vars Needed |
|----------|---------------|-----------------|
| Vertex AI | google-vertex-anthropic/claude-sonnet-4-20250514 | GOOGLE_CLOUD_PROJECT + gcloud ADC |
| Anthropic | anthropic/claude-sonnet-4-20250514 | ANTHROPIC_API_KEY |
| Google Gemini | google/gemini-2.0-flash | GEMINI_API_KEY |
| OpenAI | openai/gpt-4o | OPENAI_API_KEY |

Usage with devaipod

# Create workspace and get a shell
devaipod up https://github.com/org/repo -S
# Then run 'opencode-connect' inside the workspace to connect to the agent

# Create workspace with a task for the agent
devaipod up https://github.com/org/repo "fix the type errors in main.rs"

# Run agent on a GitHub issue (issue URL is parsed, default task: "Fix <url>")
devaipod run https://github.com/org/repo/issues/123

# Attach to the agent in a running workspace
devaipod attach myworkspace

Architecture

devaipod uses a podman pod with multiple containers:

┌──────────────────────────────────────────────────────────────────────────┐
│  Podman Pod (shared network namespace)                                    │
│                                                                           │
│  ┌──────────────────┐  ┌──────────────────┐  ┌──────────────────┐        │
│  │ Workspace         │  │ Agent             │  │ Pod-api Sidecar  │        │
│  │ • Full dev env    │  │ • opencode serve  │  │ • Serves opencode│        │
│  │ • opencode-connect│  │ • Port 4096       │  │   SPA (vendored) │        │
│  │ • Your dotfiles   │  │ • Isolated $HOME  │  │ • Proxies to     │        │
│  └──────────────────┘  └──────────────────┘  │   agent (4096)   │        │
│          │                       ▲             │ • Git/PTY APIs   │        │
│          │  attach (TUI) ────────┘             └────────┬─────────┘        │
│          │                                              │ port 8090        │
└──────────│──────────────────────────────────────────────│──────────────────┘
           │                                              │
    opencode-connect                          Control plane (:8080)
    (terminal attach)                         embeds via iframe

The primary interface is the control plane web UI at :8080, which manages pods and embeds each pod's agent view in an iframe. The pod-api sidecar serves the vendored opencode SPA and proxies API calls to localhost:4096 within the pod. See design.md for details on the architecture.

The workspace container also has an opencode-connect shim that runs opencode attach for terminal-based access, automatically continuing any existing session. All containers share the same network namespace via the pod.

Agent Support

Currently only OpenCode is supported as the AI agent. The agent container runs opencode serve and the workspace connects via opencode attach.

Contributing

The canonical contributing guide is CONTRIBUTING.md in the repository root.

Debugging

This guide covers common debugging techniques for devaipod pods and containers.

Quick Diagnostics with devaipod debug

The fastest way to diagnose issues is using the built-in debug command:

devaipod debug <workspace>

This checks:

  • Pod state and project info
  • Gator container: version, mount type, git accessibility
  • Agent container: health, MCP configuration
  • MCP connectivity between agent and gator

Example output showing a problem:

=== Pod Debug: devaipod-myproject-abc123 ===

State: Running
Project: myproject

--- Gator Container ---
  Present: yes
  Version: service-gator 0.2.0
  Workspace mount: none (read-write)
  Git accessible: NO - check mount!

--- Agent Container ---
  Health: healthy
  MCP configured: yes

--- MCP Connectivity ---
  Gator reachable from agent: yes

The "Git accessible: NO" indicates the gator can't see the workspace—likely a mount configuration issue.

Use --json for machine-readable output:

devaipod debug <workspace> --json

Manual Inspection

For deeper investigation, you can inspect pods and containers directly.

Inspecting Pods and Containers

List running pods:

podman pod ls

Check pod labels (useful for finding devaipod-managed pods):

podman pod inspect <pod> | jq '.[0].Labels'

Check a container's command:

podman inspect <container> | jq '.[0].Config.Cmd'

Check container mounts:

podman inspect <container> | jq '.[0].Mounts'

Service-gator Issues

Verifying Mounts

The gator container needs the workspace volume mounted correctly (as a named volume, not a bind mount from a temp directory). To check:

podman inspect <pod>-gator | jq '.[0].Mounts'

Look for the workspace mount—it should reference the pod's volume, not a host temp path.

Checking Git Repository Access

If git_push_local fails with "Not a git repository", the gator can't see the workspace:

podman exec <pod>-gator ls -la /workspaces/<project>/.git

This should show the .git directory contents. If it fails, the volume mount is misconfigured.

Testing Local Gator Builds

To test a locally-built service-gator image:

devaipod up . --service-gator=github:myorg/myrepo --service-gator-image localhost/service-gator:latest

MCP Connection Debugging

The agent talks to service-gator via localhost (they share a pod network namespace).

Check MCP status from the agent container:

podman exec <pod>-agent opencode mcp list

Test basic connectivity to the gator:

podman exec <pod>-agent curl -s http://localhost:8765/

Using opencode-connect

The workspace container includes opencode-connect, a script that connects to the agent. The agent listens on localhost:4096, and the gator listens on localhost:8765.

Common Issues

| Symptom | Likely Cause | Fix |
|---------|--------------|-----|
| "Not a git repository" from git_push_local | Gator can't see workspace | Check volume mounts on gator container |
| "Permission denied" on workspace files | SELinux or wrong mount type | Ensure :z label on bind mounts, or use volumes |
| Old service-gator behavior | Cached old image | Use --service-gator-image to specify version |
| MCP tools not available | Gator not running or misconfigured | Check podman ps and verify gator container is up |

See Also

devaipod Roadmap

Priorities may shift based on user feedback and practical experience.

Recently Completed

  • SSH server for editor connections: Embedded Rust SSH server using the russh crate. Supports exec, shell, PTY, and port forwarding. SFTP scaffolded but not yet fully implemented.
  • Pod-api sidecar: HTTP API sidecar per pod, serving the vendored opencode web UI and proxying agent API calls. Primary interface for the web UI.

In Progress / Near-term

  • Agent completion detection: Partially implemented via the /summary endpoint in pod-api. Still needs full idle-state detection for run mode.
  • Git state awareness: Detect and warn about unpushed commits in the workspace
  • Agent readiness probes: Partially implemented via pod-api health checks. Needs refinement for detecting when the agent is truly ready to accept connections.
  • Agent container image strategy: Options for opencode installation (dedicated image, runtime install, sidecar)

Future / Ideas

Larger features under consideration:

  • Network isolation: Configure podman-level network settings to restrict agent network access
  • LLM credential isolation: Proxy container (possibly service-gator) that holds LLM API keys, so the agent doesn't have direct credential access
  • Kubernetes support: Use kube-rs to create pods on real Kubernetes clusters for remote dev environments
  • Quadlet/systemd integration: Generate Quadlet units for proper lifecycle management
  • Local Forgejo instance: Git caching, local CI/CD, and code review UI (see forgejo-integration.md)
  • Nested devaipods: MCP tool allowing agents to spawn additional sandboxed environments
  • Worker orchestration API: MCP tools or OpenCode skill for task owner to programmatically assign subtasks to worker (see worker-orchestration-api.md)
  • Devcontainer features support: Install devcontainer features into the workspace image
  • Multi-project workspaces: Support for monorepos or multi-repo setups
  • Persistent agent state: Named volumes for agent home so context persists across pod restarts
  • Bot/assistant accounts: OAuth2 apps with "on behalf of" authentication instead of PATs

Known Limitations

  • Agent requires opencode in the image: The agent container runs opencode serve, so opencode must be installed in the devcontainer image
  • Lifecycle commands only run in workspace: onCreateCommand etc. run in the workspace container, not the agent container
  • Single agent type: Only opencode is currently tested

Internals

Crates

To build the rustdoc documentation locally:

cargo doc --workspace --no-deps --document-private-items

Key UI source files

For the core Rust source files, see Architecture.

| File | Purpose |
|------|---------|
| opencode-ui/packages/app/src/context/devaipod.tsx | Pod management context |
| opencode-ui/packages/app/src/pages/pods.tsx | Pod management page |
| opencode-ui/packages/app/src/context/workspace-terminal.tsx | Workspace PTY client |
| opencode-ui/packages/app/src/pages/session/git-review-tab.tsx | Git diff review |
| opencode-ui/packages/app/src/pages/session/terminal-panel.tsx | Agent/Workspace terminal tabs |
| opencode-ui/packages/app/src/utils/devaipod-api.ts | isDevaipod(), apiFetch, error reporting |

Testing

Rust unit tests (cargo test): ~274 tests covering web.rs routing, proxy behavior, pod configuration, git operations. Run via just test-container.

Bun unit tests (bun test with happy-dom): 46 existing test files. Covers devaipod-specific modules like utils/devaipod-api.ts (apiFetch, error reporting), context/workspace-terminal.tsx (session lifecycle), and pages/session/terminal-label.ts (kind prefix formatting).

Rust integration tests (cargo test -p integration-tests): verify HTTP endpoints, auth, static files, and proxying using curl inside a running devaipod container.

Playwright E2E tests (bun test:e2e): 33 existing specs. For devaipod features, the SPA can be served directly from the pod-api sidecar — no cookie injection needed since VITE_DEVAIPOD=true enables all devaipod code paths at build time.

Notable discoveries

  • exec_in_container has ~200-500ms overhead per call through the podman VM on macOS — this motivated creating the pod-api sidecar.
  • SELinux is enforcing on the podman machine VM; the api container needs label=disable for the podman socket.
  • GlobalSDKProvider does NOT react to URL changes — it reads server.url once at init time. This is why iframe removal is deferred (see the todo).
  • SolidJS createEffect reactive tracking — async functions reading store properties inside createEffect cause accidental tracking loops; must wrap in untrack().
  • Each pod on its own origin naturally isolates localStorage, eliminating the need for monkey-patching approaches.

Related Projects

The AI coding agent space is evolving rapidly. This page compares devaipod to related projects, with emphasis on licensing and cloud dependencies.

For broader context on the state of agentic AI coding tools, see Thoughts on agentic AI coding as of Oct 2025.

Comparison Table

| Project | License | Local-only? | Notes |
|---------|---------|-------------|-------|
| devaipod | Apache-2.0/MIT | Yes | No cloud services required |
| Docker AI Sandboxes | Proprietary | Yes | MicroVM isolation, Docker Desktop required |
| NVIDIA OpenShell | Apache-2.0 | Yes | Docker-based sandboxing with gateway control plane, Landlock/seccomp, policy-driven egress |
| nono | Apache-2.0 | Yes | OS-level sandboxing (Landlock/Seatbelt), agent-agnostic |
| OpenHands | MIT | Yes | Self-hostable, Docker-based |
| Ambient Code | MIT | Yes | Kubernetes-native, self-hosted |
| paude | MIT | Yes | Podman + OpenShift backends, agent-agnostic |
| Kortex | Apache-2.0 | Yes | Desktop GUI, AI + container/K8s management, Goose integration |
| Gastown | MIT | Yes | Multi-agent orchestration, no sandboxing |
| Gyre | No license | Yes | Built-in forge + agent orchestration platform |
| gjoll | Apache-2.0 | Yes | Cloud VM sandboxes via OpenTofu, credential-injecting reverse proxy |
| krunai | Apache-2.0 | Yes | MicroVM, but not container oriented |
| Auto-Claude | AGPL-3.0 | Yes | Desktop app, no sandboxing |
| Continue | Apache-2.0 | Partial | CLI is local; "Mission Control" cloud is proprietary |
| SWE-agent | MIT | Partial | Core is open; depends on Daytona cloud for some features |
| Ona | Proprietary | No | Cloud service, not open source |
| Cursor | Proprietary | No | Commercial product |
| Claude Code Web | Proprietary | No | Anthropic-hosted, sandboxed but not open source |

Basic Agent Frameworks

These are the "raw" agent tools that devaipod can wrap with sandboxing. They run directly on your machine with full access to your filesystem and credentials.

OpenCode

OpenCode is the primary agent framework used by devaipod. Apache-2.0 licensed. It provides a TUI and a server mode that devaipod uses for sandboxed execution.

Claude Code

Claude Code is Anthropic's official CLI agent. Proprietary, closed source. Claude Code recently added builtin sandboxing, but container-based isolation is stronger and provides a reproducible environment.

Gemini CLI

Gemini CLI is Google's agent CLI. Apache-2.0 licensed.

Gemini CLI has a "sandbox" mode using Docker, but the sandboxing is insufficient for security-conscious use:

  • The sandbox isolates filesystem access, but credentials (API keys, tokens) are still passed into the container environment
  • There is no credential scoping—if you give the agent a GitHub token, it has full access to all repos that token can reach
  • No network isolation beyond what Docker provides by default
  • No fine-grained control over what the agent can do with external services
  • No devcontainer.json support—you can't use your project's existing dev environment spec

devaipod addresses these gaps: the agent container has no direct access to your GitHub token; instead, all GitHub operations go through service-gator which enforces scopes (e.g., only draft PRs to a specific repo).
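To illustrate the difference, an API-level scope check can be sketched as a tiny per-operation allowlist. This is illustrative only: the operation names, repo name, and function below are hypothetical, not service-gator's actual policy format.

```shell
# Hypothetical per-operation allowlist, loosely in the spirit of service-gator.
# Anything not explicitly allowed is denied -- including other operations
# performed with the very same underlying token.
allow() {
  case "$1:$2" in
    create_draft_pr:myorg/myrepo) return 0 ;;  # draft PRs to one repo only
    *) return 1 ;;                             # force-push, deletes, other repos: denied
  esac
}

allow create_draft_pr myorg/myrepo && echo allowed
allow force_push myorg/myrepo || echo denied
```

The point of scoping at this layer is that a raw token (like a GitHub token passed into a sandbox environment) cannot express "draft PRs only"; an API-level gate can.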

Goose

Goose from Block is an extensible AI agent with MCP (Model Context Protocol) support. Apache-2.0 licensed, fully open source, runs locally without builtin sandboxing.

Orchestration Platforms

OpenHands

OpenHands (formerly OpenDevin) is an open platform for AI software developers. It provides a web interface for managing agent sessions with Docker-based sandboxing. MIT licensed.

OpenHands is a more complete platform with its own web UI. devaipod focuses on CLI-first workflows, devcontainer.json compatibility, and fine-grained credential scoping via service-gator.

Ambient Code Platform

Ambient Code Platform is a Kubernetes-native platform for running AI coding agents. MIT licensed (except for Claude Code), self-hostable.

Ambient Code targets team/organization deployment on Kubernetes. devaipod targets individual developer workstations with zero infrastructure beyond podman. Both projects solve credential scoping—Ambient Code's broker architecture influenced devaipod's service-gator integration.

The devaipod project would like to align more closely with Ambient Code.

paude

(This section is Assisted-by: OpenCode (Opus 4.5))

paude is a Python CLI that runs AI coding agents (Claude Code, Cursor CLI, Gemini CLI) inside secure containers. MIT licensed. It has a pluggable backend architecture with both Podman and OpenShift implementations, making it the closest existing project to what devaipod is trying to do with Kubernetes support.

The OpenShift backend is particularly interesting as prior art for devaipod's Kubernetes plans. paude's approach:

  • Uses oc CLI (subprocess) rather than a native Kubernetes client library. devaipod plans to use kube-rs instead, avoiding subprocess overhead and output parsing.
  • Creates StatefulSets (not bare Pods) for workspace lifecycle, with scale-to-zero for stop/start. devaipod's pod model maps more naturally to bare Pods since each workspace is a multi-container pod with a specific lifecycle.
  • Uses oc exec stdin/stdout tunneling with git's ext:: protocol for code sync -- the agent makes commits inside the pod, and git pull tunnels through oc exec. This sidesteps the port-forward fragility problem entirely. devaipod should consider this pattern for Model 3 (hybrid local/remote).
  • Credentials go into a tmpfs emptyDir volume (RAM-only, never persisted), synced via oc cp. This is a stronger security posture than writing credentials to a PVC.
  • Network egress filtering uses a squid proxy container for Podman and Kubernetes NetworkPolicy for OpenShift, similar in spirit to how devaipod isolates agent network access via service-gator -- though service-gator operates at the API level rather than the network level.

Key differences from devaipod: paude is agent-agnostic (wraps Claude Code, Cursor, Gemini CLI) while devaipod integrates deeply with OpenCode. paude has no devcontainer.json support and uses a single container per session rather than devaipod's multi-container pod (workspace + agent + gator + api). paude has no credential scoping equivalent to service-gator -- network-level filtering is a blunter instrument than API-level scoping.

The git-over-exec-tunnel pattern is worth stealing for devaipod's hybrid model. And paude's tmpfs credential storage is a good security practice that devaipod should adopt when running in Kubernetes.

Kortex

(This section is 85% Opus 4.6+OpenCode research, only superficial human review)

Kortex is an Electron/Svelte desktop application for AI-powered container and Kubernetes management. Apache-2.0 licensed, evolved from Podman Desktop.

Kortex occupies a different niche than devaipod: rather than sandboxing AI agents, it provides a desktop GUI that integrates AI with container and Kubernetes management. It has a pluggable "flow provider" abstraction, with Goose as the current implementation. Goose is downloaded and spawned as a CLI subprocess (goose run --recipe <path>); the flow provider interface is generic enough that other agents could be plugged in via extensions.

Interesting aspects of the Goose integration:

  • MCP passthrough: When creating a flow, users select from MCP servers registered in Kortex. Credentials are retrieved from secure storage and embedded into the Goose recipe YAML as extensions with streamable_http URIs and auth headers. This is a form of credential management, though not scoped per-operation like service-gator.
  • GUI on top of Goose: Kortex adds a full web UI for flow creation (with AI-assisted parameter extraction from prompts), execution (xterm.js terminal streaming Goose stdout/stderr), and Kubernetes deployment (generates Job + Secret + ConfigMap YAML).
  • K8s deployment: Flows can be deployed as Kubernetes Jobs running a hardcoded quay.io/kortex/goose container image (built externally in packit/ai-workflows) with the recipe mounted via ConfigMap. The image is not user-configurable. The Job is minimal: single container, no sidecars, no resource limits, no security context.
  • Chat-to-flow export: Users can export chat conversations (powered by inference providers like Gemini) into Goose recipes, bridging interactive AI chat with automated workflows.

Key differences from devaipod:

  • No agent sandboxing: Goose runs locally as a bare child_process.spawn() with full host access. No container wrapping for local execution at all.
  • No devcontainer/devfile support: Kortex has no concept of devcontainer.json or devfiles. The execution environment is either the host (local) or a hardcoded container image (K8s). Users cannot define or customize the runtime environment.
  • Hardcoded image: The K8s deployment image (quay.io/kortex/goose:2025-09-03) is a compile-time constant with no user override. The image just contains the goose binary; there's nothing else special in it.
  • GUI-first vs CLI-first: Desktop application vs terminal tool.
  • AI manages infrastructure: Kortex uses AI to help manage containers/K8s; devaipod uses containers to sandbox AI that writes code.

The projects could be complementary: Kortex could manage the container/K8s infrastructure that devaipod pods run on. More concretely, Kortex's MCP integration means it could consume service-gator as a tool provider, which would add the credential scoping that Kortex currently lacks for its Goose integration.

Gyre

(This section is Assisted-by: OpenCode (Claude Opus 4.6) research, but was human reviewed)

Gyre is an autonomous software development platform built in Rust and Svelte. The repository has no LICENSE file, though the Cargo.toml says MIT; that licensing situation should probably be clarified.

Gyre provides its own built-in git forge (Smart HTTP transport), merge queue, agent orchestrator, and identity provider. Agents are single-purpose, spawned via API, given a git worktree and scoped bearer token, and torn down after completing their task. External repos can be pull-mirrored into Gyre, but all agent work happens inside Gyre's forge.

Key points for comparison with devaipod:

  • No devcontainer.json. Agent environments use a "compute target" abstraction (local processes, Docker/Podman, SSH, Kubernetes), though the current implementation spawns local OS processes. A Nix flake is provided for the project's own development.
  • No per-agent container sandboxing. The Gyre server can run in a container (Dockerfile) or NixOS VM, but agents spawned by it are local processes with git worktree + scoped token isolation. The specs describe container/K8s compute targets and eBPF audit, but these appear unfinished.
  • No outbound forge flow. Repos can be one-way mirrored into Gyre, but there is no documented mechanism for pushing agent work back out as a GitHub PR. devaipod + service-gator is designed for exactly this -- agents opening scoped PRs on existing forges.
  • Supply chain security is ambitious: gyre-stack.lock pins agent configuration (AGENTS.md hash, MCP servers, model ID), and pushes with non-matching stacks are rejected. Three attestation levels from "raw git push" to "Gyre-managed runtime with eBPF + SPIFFE."

cgwalters: One possible intersection: devaipod could optionally run a "local forge" mode, or be configured to work alongside one. I actually investigated forgejo for this purpose in the past. It also seems like Gyre could reuse the devcontainer backend logic from devaipod.

Auto-Claude

Auto-Claude is an autonomous multi-agent coding framework with a desktop UI, Kanban board, and parallel agent execution. AGPL-3.0 licensed.

Auto-Claude has excellent UI/UX but runs agents directly on the host with full system access—no sandboxing. devaipod could serve as a sandboxed backend for Auto-Claude's interface.

Gastown

Gastown (from Steve Yegge) is a multi-agent orchestration system for Claude Code. MIT licensed, written in Go. It provides workspace management, agent coordination via "convoys", and persistent work tracking through git-backed "hooks" (git worktrees).

Gastown focuses on orchestration rather than sandboxing:

  • No container isolation—agents run in tmux sessions with full host filesystem access
  • No credential scoping—agents receive your full GitHub token, API keys, etc.
  • Claude Code runs with --dangerously-skip-permissions by default
  • No devcontainer.json support
  • Isolation is via git worktrees (separate working directories) and prompt-based instructions to "stay in your worktree"

Gastown and devaipod solve different problems and could be complementary: Gastown for orchestrating work distribution across many agents, devaipod for sandboxing individual agent execution with credential scoping.

krunai

As far as I can see, krunai is really just another virtual machine launcher; it doesn't do much that is specific to AI workloads - arguably nothing at all beyond an example init script that downloads a particular CLI tool.

I think devaipod's use of devcontainers makes sense as a mechanism for users to control their workload environment, and there is already good tooling to optionally wrap podman/kube containers in VMs where desired.

I also think that in the general case one really wants good affordances for git integration, output review, and so on.

Open Core (Partial Cloud Dependencies)

Continue

Continue provides VS Code and JetBrains extensions, plus a CLI. The extensions and CLI are Apache-2.0.

Cloud dependency: "Mission Control" (hub.continue.dev) is Continue's proprietary cloud platform for running cloud agents. The backend code is not open source. Local CLI execution has no sandboxing.

SWE-agent

SWE-agent from Princeton NLP provides an agent-computer interface for software engineering tasks. MIT licensed.

Cloud dependency: The "Open SWE" product runs on Daytona, a commercial cloud service for dev environments.

Proprietary / Cloud-Required

Ona

Ona is a commercial AI agent platform. Requires cloud services—there is no open source version or self-hosted option.

Cursor

Cursor is a commercial AI-first code editor based on VS Code. Proprietary, cloud-connected.

Claude Code Web

Claude Code is also available as a hosted web service at claude.ai. Anthropic runs it in their own sandboxed infrastructure with a git proxy for credential scoping (described in their sandboxing blog post). However, that sandbox code is not open source—you cannot run it yourself. If you want similar sandboxing locally, you need something like devaipod.

Other Sandboxing Tools

Docker AI Sandboxes

Docker AI Sandboxes is Docker's solution for running AI coding agents in isolated environments. It uses lightweight microVMs with private Docker daemons for each sandbox.

By contrast, devaipod is just a wrapper for podman and uses the devcontainer.json standard.

Note that the use case of running containers inside the sandbox is captured via nested containerization: VMs are not required.

  • Licensing: Docker Sandboxes is part of Docker Desktop, which is proprietary software requiring paid subscriptions for commercial use in organizations with 250+ employees or $10M+ revenue; devaipod is fully open source (Apache-2.0/MIT)
  • Platform: Docker Sandboxes requires Docker Desktop with microVM support (macOS, Windows experimental); devaipod uses podman and works on Linux natively
  • Credential scoping: Docker Sandboxes provides isolation but does not mention fine-grained credential scoping like service-gator; devaipod can limit agent access to specific repos/operations

nono

nono (GitHub) is an OS-level sandboxing tool for AI agents. Apache-2.0 licensed, created by Luke Hinds (creator of Sigstore).

nono defaults to Landlock on Linux and Seatbelt on macOS. I think OCI containers provide more security, are more flexible, and are better understood by existing tooling. Further, containers provide reproducible environments, which are a foundational piece of this project.

Landlock is complementary to containerization, but in my opinion the way nono uses it runs against what the Landlock creators intended: Landlock was meant primarily for applications to sandbox themselves, not as a container-replacement framework.

NVIDIA OpenShell

(This section is Assisted-by: OpenCode (Claude Opus 4.6) research, but has been refined and edited)

There's a lot of overlap with NVIDIA OpenShell. One obvious difference is that it does a pretty wild thing in running k3s inside Docker (which would probably also work with podman), whereas devaipod leans into podman's native support for pods. However, there are also clear advantages to k3s-in-container; among them, it makes it much easier to have symmetric support for a real remote Kubernetes cluster.

I think service-gator as MCP is a stronger/better solution than the generic REST proxy. We're coming at these things from a very similar space, but a key point with service-gator is that the tokens are not accessible to the agent at all. OpenShell is the closest project to devaipod in goals: both sandbox AI agents with fine-grained controls rather than just filesystem isolation. Key similarities and differences:

  • Sandboxing approach: OpenShell uses Landlock (kernel LSM) for filesystem restrictions plus seccomp for syscall filtering, layered inside Docker containers. devaipod uses OCI containers via podman with rootless execution. The author of devaipod thinks Landlock was not intended for what OpenShell or nono.sh are doing with it, and that it's mostly unnecessary here.
  • Network control: OpenShell intercepts all outbound connections via an HTTP CONNECT proxy that matches destination + calling binary against a declarative YAML policy. devaipod does not isolate network access by default (although one could configure some of that at the container networking level). service-gator is used by devaipod for safe credential-based access to specific services, but it could also be used as an MCP server in OpenShell.
  • Credential management: OpenShell uses "providers" — named credential bundles injected as environment variables at sandbox creation. Credentials are injected at runtime and never written to the sandbox filesystem. devaipod uses service-gator to avoid passing credentials to the agent at all — the agent never sees the GitHub token, it only gets scoped MCP tool access. This is a stronger isolation model for the services service-gator supports.
  • Architecture: OpenShell runs a K3s cluster inside Docker and uses a gateway/sandbox control-plane model. This is heavier than devaipod's podman pod approach (no Kubernetes layer), but positions OpenShell better for multi-tenant and remote deployment (it already supports local, remote via SSH, and cloud gateway modes).
  • Agent support: OpenShell is agent-agnostic — it wraps Claude Code, OpenCode, Codex, OpenClaw, and Ollama. devaipod integrates deeply with OpenCode at the moment, but supporting other agent types is a possibility.
  • Inference routing: OpenShell has a built-in privacy router that intercepts LLM API calls and can redirect them to local or self-hosted backends, stripping/replacing credentials. devaipod has no equivalent — inference routing is handled by the agent's own configuration.
  • devcontainer.json: devaipod uses the devcontainer.json standard for defining the agent environment. OpenShell uses community sandbox images and supports BYOC (bring your own container) but has no devcontainer.json integration.
  • git support: devaipod aims to have strong, native support for git; I don't see an equivalent in OpenShell.
  • Platform: OpenShell requires Docker. devaipod uses podman (but could also pretty easily use docker). It is also a goal to support targeting Kubernetes.

The projects share the same fundamental insight that sandboxing AI agents requires more than filesystem isolation — you need network egress control, credential scoping, and defense-in-depth.

In a nutshell, I am considering:

  • Rebasing devaipod on OpenShell
  • Trying to contribute service-gator to that project

gjoll

(This section is Assisted-by: OpenCode (Claude Opus 4.6), based on source code analysis of the gjoll repository)

gjoll is a Go CLI tool that provisions cloud VM sandboxes for coding agents using standard OpenTofu .tf files. Apache-2.0 licensed, experimental. The design philosophy is radical simplicity: gjoll injects three variables into your .tf file (gjoll_ssh_pubkey, gjoll_name, gjoll_instance_state), runs tofu apply, and gets out of the way. It supports any cloud provider that has an OpenTofu provider (AWS, Proxmox, libvirt/QEMU, etc.).

The architecture is interesting because it solves similar problems to devaipod but makes fundamentally different trade-offs — full VMs instead of containers, SSH-based git transport instead of forge integration, and HTTP reverse proxies instead of MCP-based credential scoping.

(Note from devaipod author: Nothing wrong with provisioning classic mutable VMs, but I think containers are architecturally the right choice; where VM isolation on top of containerization is desired, there's tons of tools for that)

Git workflow — gjoll has a dedicated git sync mechanism via gjoll push and gjoll pull. push initializes a repo on the VM with receive.denyCurrentBranch=updateInstead, sets the remote HEAD to match the local branch via git symbolic-ref, and pushes over SSH (GIT_SSH_COMMAND=ssh -F <config>). The working tree on the VM updates immediately — no separate checkout step. pull fetches back from the VM and creates a local branch named gjoll-<name> (hyphens, not slashes, to avoid breaking tools like lazygit).

(devaipod author: This is much less heavyweight than devaipod's choice to have a git clone per pod, and has clear advantages. Similar to paude in that respect.)

The workflow is: push code to VM → agent works → pull changes back → human creates PR locally.
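The receiving-side mechanism is easy to demonstrate locally. gjoll does this over SSH, but receive.denyCurrentBranch=updateInstead behaves the same over any transport; the paths below are throwaway temp directories.

```shell
set -e
tmp=$(mktemp -d)

# "VM-side" repo: a normal (non-bare) repo configured to accept pushes to
# its checked-out branch and update the working tree in place.
git init -q -b main "$tmp/vm"
git -C "$tmp/vm" -c user.email=a@b -c user.name=a \
  commit -q --allow-empty -m init
git -C "$tmp/vm" config receive.denyCurrentBranch updateInstead

# Local clone standing in for the developer machine.
git clone -q "$tmp/vm" "$tmp/local"
echo hello > "$tmp/local/file.txt"
git -C "$tmp/local" add file.txt
git -C "$tmp/local" -c user.email=a@b -c user.name=a commit -q -m work
git -C "$tmp/local" push -q origin main

cat "$tmp/vm/file.txt"   # prints "hello": the VM working tree updated, no checkout step
```

Note that updateInstead refuses the push if the receiving working tree is dirty, which is a reasonable safety property for this workflow.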

By contrast, devaipod's service-gator provides the agent with scoped forge access, and we plan to invest in having a good review process inside the UI, and also allow some autonomous updates.

Credential gating — this is gjoll's most distinctive feature and the area of strongest overlap with devaipod's goals. The gjoll proxy command runs local HTTP reverse proxies on the host that inject authentication headers, with SSH reverse tunnels (-R) making them reachable on the VM as localhost:<port>. Credentials never leave the host machine.

Three auth modes are supported: gcp (GCP Application Default Credentials via google.DefaultTokenSource(), with automatic token refresh), api-key (static key read from a local file, injected as x-api-key header), and no-auth passthrough. The proxy binds to 127.0.0.1:0 only — not network-reachable. Token fetch failures surface as 502 errors rather than forwarding unauthenticated requests.

However, the proxy provides full, unscoped access to the upstream API. Any request to http://localhost:<port>/any/path is forwarded with credentials attached. There is no URL path filtering, no HTTP method restrictions, no rate limiting, and no audit logging beyond error messages. A misbehaving agent can make any API call the credential allows.

There is no support for GitHub tokens — the proxy is designed for LLM API access (Vertex AI, Anthropic), not forge operations. To give a sandboxed agent GitHub access, you'd need to either extend the proxy with a new auth mode for Bearer tokens and add path-level scoping, or copy the token to the VM directly (their ubuntu-claude.tf example shows a commented-out copy_files approach for this, though the newer ubuntu-claude-vertex.tf example explicitly avoids it in favor of proxying).

The contrast with service-gator is architectural: gjoll gives the agent a raw HTTP pipe with credentials injected (network-level proxy), while service-gator gives the agent semantic tools with per-operation permission checks (MCP-level scoping). You can tell service-gator "this agent can create draft PRs on owner/repo but cannot force-push or delete branches." gjoll's proxy has no equivalent — it's all-or-nothing per API target.

The proxy model is well-suited for LLM API access where you want the agent to make arbitrary API calls to the model provider. service-gator is better for forge operations where you want to constrain what the agent can do. The two approaches are complementary rather than competing.

Other notable differences from devaipod:

  • Isolation unit: Full cloud VMs (via OpenTofu) vs. OCI containers (via podman). VMs provide stronger isolation but are heavier — gjoll requires cloud infrastructure or local libvirt/QEMU, while devaipod runs with just podman.
  • Environment definition: Raw .tf files vs. devcontainer.json. gjoll is maximally flexible but requires HCL knowledge; devaipod uses the standard devcontainer spec.
  • SSH security: Per-sandbox ed25519 keypairs with IdentitiesOnly yes and IdentityAgent none (no agent forwarding). But StrictHostKeyChecking no for ephemeral VMs — pragmatic but means no MITM protection.

Why devaipod?

  1. Fully open source: Apache-2.0/MIT, no "open core" trap
  2. 100% local: No cloud services required (you bring your own LLM API keys)
  3. devcontainer.json: Uses the standard spec, not custom formats
  4. Fine-grained credential scoping: service-gator MCP provides scoped access (e.g., draft PRs only to specific repos)—not just filesystem sandboxing
  5. Podman-native: Rootless containers, works in toolbox, no Docker daemon required

Reusable Components

A design goal for devaipod is that its core components should be reusable building blocks, not a monolithic system. Projects like OpenHands, Ona, and Ambient Code are building centralized platforms for corporate/team agentic AI usage. We hope that a fully open source version of such a platform emerges, and when it does, components from devaipod should be useful:

  • service-gator: Fine-grained credential scoping for GitHub/GitLab/Forgejo could plug into any orchestration system
  • Container sandboxing patterns: The podman pod architecture with separate workspace/agent/gator containers
  • devcontainer.json integration: Parsing and applying the devcontainer spec for agent environments

devaipod is designed for individual developers today, but the primitives should scale to team/org deployment when composed with appropriate orchestration.