v0.3.0 — Now on PyPI

Give your AI agent
its own computer

A self-hosted sandbox that boots a real virtual machine for AI agents. Hardware-isolated by Firecracker. No containers. No cloud.

bunkervm
~100MB
Bundle size
~5s
Boot time
8
MCP tools
0
Cloud dependencies

How the agent connects

Your AI talks to BunkerVM over MCP. BunkerVM boots a Firecracker microVM and relays commands over vsock. The agent never touches your host.

🤖

AI Agent

Claude, GPT, LangGraph

MCP
stdio
⚙️

bunkervm

Host process

vsock
isolated
🛡️

Firecracker VM

Alpine Linux sandbox
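In code, the relay step looks roughly like this. A real vsock link needs a live guest, so a local socketpair stands in for the host-to-VM connection here, and the newline-delimited JSON framing is an illustrative assumption, not BunkerVM's actual wire protocol:

```python
import json
import socket

# A socketpair stands in for the host<->VM vsock link so this sketch runs
# anywhere; the JSON-per-line framing is an assumption for illustration.
host, guest = socket.socketpair()

# Host side: send one command frame to the guest.
host.sendall(json.dumps({"tool": "sandbox_exec", "cmd": "echo hi"}).encode() + b"\n")

# Guest side (normally the agent process inside the Alpine VM): read the
# frame, execute it, and reply with the result.
frame = json.loads(guest.makefile("r").readline())
guest.sendall(json.dumps({"stdout": "hi\n", "exit_code": 0}).encode() + b"\n")

# Host side: read the reply and hand it back to the MCP layer.
reply = json.loads(host.makefile("r").readline())
print(reply["stdout"].strip())  # hi
```

The point of the boundary: even if the guest-side process is compromised, it can only speak this narrow channel back to the host.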

Everything your agent needs

Eight MCP tools give your agent a full Linux environment, including file transfer between host and VM. Every command runs inside the VM, never on your host.

sandbox_exec

Run any shell command — Python, bash, curl, whatever the agent needs.

sandbox_write_file

Create or overwrite files inside the VM filesystem.

📄

sandbox_read_file

Read file contents back from the VM to the agent.

📁

sandbox_list_dir

Browse the VM directory tree.

📈

sandbox_status

Check VM health — CPU, RAM, disk, uptime.

📤

sandbox_upload_file

Upload files from your host into the VM.

📥

sandbox_download_file

Download files from the VM back to your host.

🔄

sandbox_reset

Wipe everything. Clean slate in seconds.
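On the wire, invoking any of these tools is an MCP `tools/call` request. The JSON-RPC envelope below follows the MCP framing; the `sandbox_exec` argument key (`"command"`) is an assumption about the tool's schema, not confirmed from BunkerVM's source:

```python
import json

# What an MCP client sends (over stdio) to invoke a BunkerVM tool.
# Envelope per the MCP spec; the "command" argument name is assumed.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "sandbox_exec",
        "arguments": {"command": "python3 -c 'print(6 * 7)'"},
    },
}
print(json.dumps(request, indent=2))
```

The server replies with a matching `id` and the tool's output as result content, which the MCP client hands back to the model.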

VMs vs Containers

Containers share your host's kernel; a microVM runs its own. Escaping a hardware-virtualized VM is orders of magnitude harder than escaping a container.

BunkerVM microVM

Separate kernel — hardware boundary
Escape requires a hypervisor exploit
pip install — done
No cloud, no API keys
Optional internet access

Docker Container

Shared kernel with host
Container escapes exist
Faster boot (~0.5s)
Huge ecosystem
More mature tooling

Claude Desktop config

Add this to your config file. On first run, BunkerVM downloads the micro-OS automatically.

💻 Windows (WSL2)
{
  "mcpServers": {
    "bunkervm": {
      "command": "wsl",
      "args": ["-d", "Ubuntu",
        "--", "sudo",
        "python3", "-m",
        "bunkervm"]
    }
  }
}
🐧 Linux / macOS
{
  "mcpServers": {
    "bunkervm": {
      "command": "sudo",
      "args": ["python3",
        "-m",
        "bunkervm"]
    }
  }
}

Works with any MCP client

BunkerVM speaks the Model Context Protocol.

Claude Desktop
LangGraph
LangChain
OpenAI Agents SDK
CrewAI
Any MCP Client

Current limitations

It works, but it's early.

Needs sudo
x86_64 only
Requires KVM / WSL2
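A quick preflight sketch for those requirements. The check names and messages are illustrative, not part of BunkerVM's CLI:

```python
import os
import platform

# Preflight for the limitations above: x86_64 CPU, /dev/kvm present
# (KVM on Linux, or WSL2 on Windows), and root for Firecracker.
# Illustrative only -- not a BunkerVM command.
def preflight():
    checks = {
        "x86_64 CPU": platform.machine() == "x86_64",
        "/dev/kvm (KVM or WSL2)": os.path.exists("/dev/kvm"),
        "running as root (sudo)": hasattr(os, "geteuid") and os.geteuid() == 0,
    }
    for name, ok in checks.items():
        print(f"{'ok' if ok else 'MISSING'}: {name}")
    return all(checks.values())

preflight()
```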

Works with every major framework

LangGraph, OpenAI Agents SDK, CrewAI — install the extras, import the toolkit. Every tool call is logged.

🤖 examples/test_agent.py
from langchain_openai import ChatOpenAI
from langgraph.prebuilt import create_react_agent
from bunkervm.langchain import BunkerVMToolkit

toolkit = BunkerVMToolkit()  # auto-connects to running VM
agent = create_react_agent(ChatOpenAI(model="gpt-4o"), toolkit.get_tools())

result = agent.invoke({
    "messages": [("human", "Write a script that finds primes under 50, run it")]
})
💡 OpenAI Agents SDK
from agents import Agent, Runner
from bunkervm.openai_agents import BunkerVMTools

agent = Agent(
    name="coder",
    tools=BunkerVMTools().get_tools(),
)
result = Runner.run_sync(agent, "Find primes under 50")
⚙️ CrewAI
from crewai import Agent, Task, Crew
from bunkervm.crewai import BunkerVMCrewTools

coder = Agent(
    role="Engineer",
    goal="Solve coding tasks in the sandbox",
    backstory="Writes and runs code inside BunkerVM.",
    tools=BunkerVMCrewTools().get_tools(),
)
task = Task(
    description="Find primes under 50",
    expected_output="A list of primes",
    agent=coder,
)
Crew(agents=[coder], tasks=[task]).kickoff()
$ pip install bunkervm[all]  # LangGraph + OpenAI + CrewAI

→ sandbox_write_file: /tmp/primes.py
  ← wrote 312 bytes
→ sandbox_exec: python3 /tmp/primes.py
  ← 2 3 5 7 11 13 17 19 23 29 31 37 41 43 47

🤖 The prime numbers under 50 are: 2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47.
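A script like the one the agent writes in that transcript could be a simple sieve. This is a plausible reconstruction, not the agent's actual output:

```python
# Roughly what the agent might write to /tmp/primes.py in the transcript
# above -- a plausible reconstruction, not the actual file.
def primes_under(limit):
    sieve = [True] * limit
    sieve[0:2] = [False, False]          # 0 and 1 are not prime
    for n in range(2, int(limit ** 0.5) + 1):
        if sieve[n]:
            # Cross off every multiple of n starting at n*n.
            sieve[n * n :: n] = [False] * len(sieve[n * n :: n])
    return [n for n in range(limit) if sieve[n]]

print(*primes_under(50))  # 2 3 5 7 11 13 17 19 23 29 31 37 41 43 47
```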

Ready to isolate your agents?

One command to install. One config to connect.