Making devcontainers even better in the terminal
I’ve been using devcontainers for development work and I’m a fan. Consistent environments, isolated dependencies, everything defined in a devcontainer.json that the whole team shares. The devcontainer CLI is the foundation of this, and if like me you prefer working in a terminal rather than VS Code, it’s been a solid companion.
Except for two things that are really annoying:
1. Port forwarding: When a process inside a devcontainer listens on a port (say, a dev server on :3000), VS Code automatically makes it accessible on your host machine. The devcontainer CLI does not.
2. Browser opening: When a container process tries to open a URL (think OAuth callbacks, documentation links), VS Code opens it in your host browser. The devcontainer CLI cannot.
These two gaps break real workflows. OAuth flows that bind a random port, open a browser, and expect a callback on localhost fail completely. Dev servers are unreachable without manual port forwarding in devcontainer.json.
I built devcontainer-bridge (dbr) to fix both.
How dbr works
dbr uses a reverse connection model (similar to ssh -R) in which all TCP connections flow from the container to the host. The container daemon initiates everything outward, and the host daemon multiplexes forwarded traffic over those connections.
┌──────────────── Host Machine ──────────────────────────────┐
│                                                            │
│  dbr host-daemon                                           │
│   ├─ Control: :19285 (JSON-lines protocol)                 │
│   ├─ Data: :19286 (reverse data connections)               │
│   ├─ Binds loopback:PORT for each forwarded port           │
│   └─ Opens URLs in host browser (open/xdg-open)            │
│                                                            │
└────────────────────────────────────────────────────────────┘
                 ▲  All connections initiated container → host
                 │  via host.docker.internal
┌────────────────┴─── Devcontainer ──────────────────────────┐
│                                                            │
│  dbr container-daemon                                      │
│   ├─ Polls /proc/net/tcp every 1s for new listeners        │
│   ├─ Sends Forward/Unforward to host via control channel   │
│   └─ Opens reverse data connections for proxied traffic    │
│                                                            │
└────────────────────────────────────────────────────────────┘
The two daemons communicate over a JSON-lines protocol on a persistent control channel. A separate data channel carries the actual proxied TCP traffic.
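To make the control channel concrete, here is a rough sketch of what a JSON-lines message could look like as a serde-tagged enum. The message and field names are illustrative, not dbr's actual wire format.

    // Hypothetical shape of a control message; dbr's real schema may differ.
    use serde::{Deserialize, Serialize};

    #[derive(Serialize, Deserialize, Debug)]
    #[serde(tag = "type")]
    enum ControlMsg {
        // Container asks the host to bind a loopback listener for this port.
        Forward { container_id: String, port: u16 },
        // Container reports that the listener has gone away.
        Unforward { container_id: String, port: u16 },
        // Container asks the host to open a URL in the default browser.
        OpenUrl { container_id: String, url: String },
    }

    fn main() -> serde_json::Result<()> {
        // One JSON object per line on the persistent control connection.
        let msg = ControlMsg::Forward { container_id: "myapp_dev".into(), port: 3000 };
        println!("{}", serde_json::to_string(&msg)?);
        // {"type":"Forward","container_id":"myapp_dev","port":3000}
        Ok(())
    }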
The container daemon polls /proc/net/tcp every second to detect new listening ports (the same approach VS Code uses internally). When it finds one, it tells the host daemon to bind a loopback listener on the matching port. When traffic arrives on the host side, it’s proxied through to the container via a reverse data connection. One host daemon serves all your running devcontainers, handling port conflicts automatically by assigning alternative host ports when needed.
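If you haven't poked at /proc/net/tcp before, listener detection is simpler than it sounds. Here is a minimal sketch of the idea (the real daemon also reads /proc/net/tcp6 and diffs each poll against the previous one):

    use std::collections::HashSet;
    use std::fs;

    // Return the set of TCP ports currently in the LISTEN state.
    fn listening_ports() -> std::io::Result<HashSet<u16>> {
        let mut ports = HashSet::new();
        for line in fs::read_to_string("/proc/net/tcp")?.lines().skip(1) {
            let cols: Vec<&str> = line.split_whitespace().collect();
            // Column 1 is local_address as HEXIP:HEXPORT; column 3 is the
            // socket state, where 0A means LISTEN.
            if cols.len() > 3 && cols[3] == "0A" {
                if let Some(hex_port) = cols[1].rsplit(':').next() {
                    if let Ok(port) = u16::from_str_radix(hex_port, 16) {
                        ports.insert(port);
                    }
                }
            }
        }
        Ok(ports)
    }

    fn main() -> std::io::Result<()> {
        // Poll this once a second and diff against the previous result to
        // find newly opened (or closed) listeners.
        println!("listening: {:?}", listening_ports()?);
        Ok(())
    }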
For browser opening, setting BROWSER=dbr-open in your container shell profile is all it takes. When a tool calls $BROWSER, the URL gets sent to the host daemon, which opens it with open (macOS) or xdg-open (Linux). URLs are even rewritten automatically if a container port has been remapped to a different host port, which keeps OAuth callbacks working even when ports shift.
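The rewrite step itself is small. Something along these lines, simplified and assuming the host daemon already knows the container-to-host port mapping for the container that sent the URL:

    // Simplified sketch: swap the container port for the remapped host port
    // in a loopback URL before handing it to the browser.
    fn rewrite_url(url: &str, container_port: u16, host_port: u16) -> String {
        if container_port == host_port {
            return url.to_string();
        }
        url.replace(
            &format!("localhost:{container_port}"),
            &format!("localhost:{host_port}"),
        )
    }

    fn main() {
        // The OAuth callback still lands on a port the host is actually listening on.
        let rewritten = rewrite_url("http://localhost:39821/callback?code=abc", 39821, 40021);
        assert_eq!(rewritten, "http://localhost:40021/callback?code=abc");
    }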
Getting started
Setup is straightforward.
- Install the host binary, or build from source if you prefer.

      curl -fsSL \
        https://github.com/bradleybeddoes/devcontainer-bridge/releases/latest/download/install.sh \
        | bash

- Start the host daemon.

      dbr host-daemon

  This runs in the foreground. If you'd prefer something you can drop into a shell alias or dotfile, dbr ensure checks whether the host daemon is already running and starts it if not.

- Add the devcontainer feature to your project.

      {
        "features": {
          "ghcr.io/bradleybeddoes/devcontainer-bridge/dbr:0": {}
        }
      }

  This installs dbr inside the container and starts the container daemon automatically on boot via the feature's entrypoint. No manual setup inside the container.

- Configure browser integration.

  Add this to your ~/.zshrc or ~/.bashrc inside the container (via your personal dotfiles):

      export BROWSER=dbr-open

  Most tools respect the BROWSER environment variable: Node.js open, Python webbrowser, Rust's open crate, and others.

- Verify.

      $ dbr status
      Container   Port    Host Port   Process    Since
      myapp_dev   8080    8080        node       2m ago
      myapp_dev   39821   39821       mcp-auth   5s ago

That's it. Ports are forwarded, browsers open on the host.
Safe for mixed teams
If you’re working on a team, you might be wondering whether adding this to a shared config is safe. One of the design constraints was that dbr needed to be safe to add to shared devcontainer.json files in teams where some people use VS Code and others use the CLI.
VS Code users are not impacted. The dbr binary is installed but the system is effectively inert without the host daemon running. It doesn’t set global environment variables or interfere with VS Code’s own port forwarding. Think of it like having nvim installed but unused.
The container daemon does run in the background (started by the feature’s entrypoint), but it only initiates outbound TCP connections. It binds no ports inside the container. If no host daemon is running, it reconnects silently with exponential backoff and negligible resource cost.
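For the curious, the reconnect behaviour is the standard pattern. Roughly, in a tokio-flavoured sketch (not dbr's actual code):

    use std::time::Duration;
    use tokio::net::TcpStream;
    use tokio::time::sleep;

    // Keep trying to reach the host daemon, doubling the delay up to a cap,
    // so a container with no host daemon costs essentially nothing.
    async fn connect_with_backoff(addr: &str) -> TcpStream {
        let mut delay = Duration::from_secs(1);
        loop {
            match TcpStream::connect(addr).await {
                Ok(stream) => return stream,
                Err(_) => {
                    sleep(delay).await;
                    delay = (delay * 2).min(Duration::from_secs(60));
                }
            }
        }
    }

    #[tokio::main]
    async fn main() {
        // Control channel address from the architecture diagram above.
        let _control = connect_with_backoff("host.docker.internal:19285").await;
    }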
Terminal developers activate dbr through personal dotfiles and shell configuration, not shared project files. VS Code developers won’t notice it’s there.
Security considerations
Given my background in security and compliance, this was something I thought about from the start.
dbr uses a two-tier binding model. Forwarded port listeners always bind to loopback only (127.0.0.1 / [::1]), never 0.0.0.0. Your forwarded dev server is only reachable from your machine, not the network. The control and data ports bind to 0.0.0.0 only when Docker is detected (necessary for Docker Desktop on macOS, where containers reach the host via a gateway IP) and fall back to loopback otherwise. Override with --bind-addr or --no-docker-detect if you need explicit control.
The host daemon accepts a fixed set of protocol messages. A container cannot instruct the host to run arbitrary commands. The only host-side actions are binding loopback ports and opening validated HTTP/HTTPS URLs. URLs are passed directly as process arguments (not through a shell), so there’s no command injection surface.
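In code terms, the host-side open is roughly this shape (a simplified sketch; dbr's real validation is stricter):

    use std::process::Command;

    // Open a URL in the host browser without ever involving a shell.
    fn open_url(url: &str) -> std::io::Result<()> {
        // Only http/https URLs are accepted.
        if !(url.starts_with("http://") || url.starts_with("https://")) {
            return Err(std::io::Error::new(
                std::io::ErrorKind::InvalidInput,
                "refusing to open non-http(s) URL",
            ));
        }
        let opener = if cfg!(target_os = "macos") { "open" } else { "xdg-open" };
        // The URL is a single argv entry, never interpolated into a shell string,
        // so shell metacharacters in it have no effect.
        Command::new(opener).arg(url).status()?;
        Ok(())
    }

    fn main() -> std::io::Result<()> {
        open_url("https://example.com/oauth/callback?code=abc123")
    }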
Resource limits are enforced throughout: 64 KB message cap, max 64 concurrent containers, max 128 forwards per container, rate-limited URL opens. The entire project uses zero unsafe Rust code.
The full threat model is documented in security.md.
Building it with Claude Code agent teams
This section is a bit of a departure from the tool itself, but it’s relevant to my ongoing exploration of AI-assisted development.
I built dbr as an exploratory collaboration with Claude Code agent teams. I did the planning and design work: defining the problem, choosing the technology (Rust, tokio for async, serde for the protocol), sketching the architecture, and writing the specification. Then I handed the implementation to a team of 17 AI agents.
The agents created the codebase, tests, documentation, and end-to-end validation scripts. They did exceptionally well. But the application didn’t function straight out of the box, which I expected given the scope. I then spent a few more hours pair programming with Claude Code to debug, refine, and get everything working.
A few honest observations from the experience:
What worked well: The agents produced well-structured Rust code that followed the architecture I’d specified. The protocol implementation, the /proc/net/tcp parser, the async connection handling, the CLI, the documentation — all were high-quality starting points. Having 17 agents work in parallel meant the initial codebase came together remarkably quickly.
What needed human attention: Integration between systems was where things fell apart. The way components connected had subtle bugs, the kind of issues you only find by actually running the system end to end. We've since created a full e2e script that invokes multiple containers so this won't happen again, which is probably a necessary part of the plan going forward for any networked tool.
My takeaway: AI agent teams are a genuinely useful tool for bootstrapping projects, especially when you invest in clear specification and architecture upfront. The better your input, the better their output. But “ship it and walk away” isn’t where we are yet. That may well change as the technology continues to improve and we learn more about optimising inputs. I intend to keep experimenting both personally and across teams I lead.
Early days
This project is new and in active development. It’s been tested on macOS only so far; Linux host support is implemented but unverified. Bugs are expected.
If you’re a terminal-first devcontainer user and want to give it a try, I’d genuinely appreciate the feedback. Issues and contributions are welcome at github.com/bradleybeddoes/devcontainer-bridge.
