In my quest for the cheapest way to deploy a containerized app, I found out about the DOCKER_HOST environment variable recognized by the Docker CLI. It's cooler than it sounds, and recent chats with Internet strangers convinced me that not enough people know about it. So let's kick the tires.
The Docker CLI talks to the daemon (server) on the local machine by default
This is the kind of thing you read once and then probably forget about, but Docker is a client-server application.
$ docker info
Client: Docker Engine - Community
 Version:    28.3.1
 Context:    default
 # etc.

Server:
 Containers: 21
  Running: 0
  Paused: 0
  Stopped: 21
 Images: 246
 Server Version: 28.3.1
 # etc.
But of course when you run something like docker run --rm busybox echo "hi", everything except pulling the busybox:latest image happens on your local machine. Let's "prove" that.
$ docker --help | grep daemon
-c, --context string Name of the context to use to connect to the daemon (overrides DOCKER_HOST env var and default context set with "docker context use")
So DOCKER_HOST is the environment variable that can override the default Docker context (and is itself overridden by the --context flag). We can see the context used by default:
$ docker context inspect default
[
    {
        "Name": "default",
        "Metadata": {},
        "Endpoints": {
            "docker": {
                "Host": "unix:///var/run/docker.sock",
                "SkipTLSVerify": false
            }
        },
        "TLSMaterial": {},
        "Storage": {
            "MetadataPath": "\u003cIN MEMORY\u003e",
            "TLSPath": "\u003cIN MEMORY\u003e"
        }
    }
]
unix:///... means the client is talking over a Unix socket on the local machine. Cool. So if we stop the Docker daemon, docker commands should fail, right?
$ sudo systemctl list-units docker.*
UNIT           LOAD   ACTIVE SUB     DESCRIPTION
docker.service loaded active running Docker Application Container Engine
docker.socket  loaded active running Docker Socket for the API
$ sudo systemctl stop docker.service docker.socket
$ docker run --rm busybox echo "hi"
docker: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
Run 'docker run --help' for more information
Cool. So what good is it to tell the Docker client to talk to a different server?
Tell a remote Docker server to run containers
My local Docker daemon is stopped, but I just created a cheap Ubuntu server and installed Docker on it. I can tell my client to connect to the Docker daemon on that server using SSH:
$ DOCKER_HOST=ssh://root@174.138.83.113 docker info
Client: Docker Engine - Community
 Version: 28.3.1
 # ...

Server:
 Containers: 0
  Running: 0
  Paused: 0
  Stopped: 0
 Images: 0
 Server Version: 28.3.2
 # ...
Wonderful. And now every Docker command works, but it runs on that remote server:
$ DOCKER_HOST=ssh://root@174.138.83.113 docker run --rm busybox echo "hi"
Unable to find image 'busybox:latest' locally
latest: Pulling from library/busybox
90b9666d4aed: Pull complete
Digest: sha256:f85340bf132ae937d2c2a763b8335c9bab35d6e8293f70f606b9c6178d84f42b
Status: Downloaded newer image for busybox:latest
hi
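By the way, typing DOCKER_HOST= before every command gets old fast. The context mechanism we saw in the --help output gives you a named, persistent version of the same thing. Something like this should work (the context name vps is just my choice):

$ docker context create vps --docker "host=ssh://root@174.138.83.113"
$ docker context use vps

I'll keep writing DOCKER_HOST explicitly below, though, so it stays obvious which daemon each command is hitting.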
What about docker build?
Here are the files needed to build a Docker image that serves an HTML page:
# Dockerfile
FROM caddy:2.9.1 AS caddy
COPY Caddyfile /etc/caddy/Caddyfile
COPY index.html /usr/share/caddy/index.html
# Caddyfile
:80 {
    root * /usr/share/caddy
    file_server
}
<!-- index.html -->
<!doctype html>
<html>
  <head>
    <meta charset="utf-8" />
    <title>Hello, remote Docker host!</title>
    <meta name="viewport" content="width=device-width, initial-scale=1" />
  </head>
  <body>
    <p>Hello, remote Docker host!</p>
  </body>
</html>
Of course I still can't build it locally:
$ docker build -t docker-remote-host:test .
ERROR: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
But I can build it remotely:
$ DOCKER_HOST=ssh://root@174.138.83.113 docker build -t docker-remote-host:test .
[+] Building 4.3s (8/8) FINISHED          docker:default
 => [internal] load build definition from Dockerfile
 => => transferring dockerfile: 143B
 => [internal] load metadata for docker.io/library/caddy:2.9.1
 => [internal] load .dockerignore
 # ...
 => naming to docker.io/library/docker-remote-host:test
Where is the image? It's on the remote machine's filesystem:
$ DOCKER_HOST=ssh://root@174.138.83.113 docker image ls
REPOSITORY           TAG       IMAGE ID       CREATED         SIZE
docker-remote-host   test      0a6c61f9af6e   2 minutes ago   48.5MB
busybox              latest    6d3e4188a38a   9 months ago    4.28MB
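And since the daemon holding that image is the one on the VPS, running it there and poking it from my laptop is one command each. A sketch (the container name hello and the port mapping are my own choices):

$ DOCKER_HOST=ssh://root@174.138.83.113 docker run -d --name hello -p 80:80 docker-remote-host:test
$ curl http://174.138.83.113/  # should return the index.html baked into the image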
Where did my "source code" go? The index.html and Caddyfile (along with the Dockerfile, as part of the build context) go over the wire to the remote server to be built into the image, but they do not end up on the remote server as loose files. The only new "thing" produced by docker build is the image, which lives in the remote daemon's local image store.
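One practical consequence: the entire build context directory is what gets shipped over that SSH connection, so on a slow uplink it pays to keep it lean. A .dockerignore along these lines (the entries are just examples) keeps junk out of the transfer:

# .dockerignore
.git
node_modules
*.log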
What's something remotely interesting about this, though?
I came across this idea while looking for a way to make little proof-of-concept projects more cheaply. The user-friendly container registries all want $5/month, and the other registries are AWS. For some reason I didn't want to use either at the time. With this trick, though, you can build images and run containers directly on a remote server with no separate, hosted container registry. I won't bother to show you, but trust me that this DOCKER_HOST trick works pretty seamlessly with docker compose as well, so you can docker compose up -d and suddenly your application and database are running remotely. You can pack multiple projects onto a single VPS for even more frugality!
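For the skeptical, here's a minimal sketch of what that might look like for the Caddy image above (the service name web is a placeholder):

# compose.yaml
services:
  web:
    build: .
    ports:
      - "80:80"

$ DOCKER_HOST=ssh://root@174.138.83.113 docker compose up -d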
One nasty thing to watch out for, though, is environment variable interpolation in Docker Compose files. It is easy to accidentally have your development values slurped up into Docker Compose and sent off to production if you do both from the same terminal session. Maybe some strict use of direnv could help?
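As a sketch of that idea, a per-project .envrc that direnv loads when you cd into the directory (and unloads when you leave) might look like:

# .envrc -- run `direnv allow` after editing
export DOCKER_HOST=ssh://root@174.138.83.113

That way the production DOCKER_HOST only exists while you're standing in the project that deploys there.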