Docker Container Backups: Leveling Up with v2.0
Back in February, I wrote about how I finally gave my home lab a real backup strategy using a containerized Flask server, rclone, and OneDrive. The solution worked well, but only for a single host. If you ran containers across multiple machines, you were on your own.
That changes with v2.0.
What Was Missing
The original setup was straightforward: one container, one host, one containers.json, and a cron job to kick things off. It solved the problem I had at the time.
But home labs grow. As I added more hosts, I found myself duplicating the setup and having no single place to check on the health of all my backups. That itch needed scratching.
Introducing the Hub/Spoke Architecture
v2.0 introduces a hub/spoke model for multi-host backup orchestration. Every instance of flask-container-backup is now a spoke by default — it behaves exactly as it did in v1.0. Nothing breaks.
The new piece is the hub. Set MODE=hub in your environment, point it at a spokes.json config file listing your remote spoke agents, and you now have a central orchestrator that can coordinate backups and aggregate status across your entire fleet.
In the hub's compose file:

    environment:
      - MODE=hub

And in spokes.json:

    [
      { "name": "host-a", "url": "http://192.168.1.10:2128" },
      { "name": "host-b", "url": "http://192.168.1.11:2128" }
    ]
One guardrail worth noting: if you set MODE=hub but spokes.json is missing or empty, the container exits at startup with a fatal error. No silent failures.
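As a rough sketch, that fail-fast startup check might look like the following. The MODE variable, the spoke default, and the /app/config/spokes.json path come from the post; the function name and exact error messages are my own invention:

```python
import json
import os
import sys

# Assumed path, per the config consolidation in /app/config.
SPOKES_FILE = "/app/config/spokes.json"

def load_spokes_or_exit():
    """Fail fast in hub mode: a missing or empty spokes.json is fatal."""
    if os.environ.get("MODE", "spoke") != "hub":
        return []  # spoke mode needs no spoke list
    try:
        with open(SPOKES_FILE) as f:
            spokes = json.load(f)
    except (FileNotFoundError, json.JSONDecodeError) as exc:
        sys.exit(f"FATAL: could not read {SPOKES_FILE}: {exc}")
    if not spokes:
        sys.exit(f"FATAL: {SPOKES_FILE} is empty; hub mode needs at least one spoke")
    return spokes
```

Exiting at startup rather than limping along means a misconfigured hub shows up immediately in `docker logs`, not days later as a gap in your backups.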
New: The /status Endpoint
Every spoke now exposes a GET /status endpoint that returns the result of the last backup run — including a timestamp, which containers were backed up, and any errors encountered. Results are also written to backup_result.json after each run, so they survive a container restart.
    {
      "timestamp": "2026-04-08T13:00:00",
      "containers_backed_up": ["caddy", "freshrss", "homeassistant"],
      "errors": []
    }
The hub aggregates this across all configured spokes when you hit its own /status endpoint, giving you a unified view of backup health across every host in one call.
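A hub-side aggregation loop can be sketched using nothing but the spoke /status contract and the spokes.json entries shown earlier. The function name, timeout, and error handling here are my assumptions:

```python
import json
import urllib.request

def aggregate_status(spokes, timeout=5):
    """Poll each spoke's GET /status and key the results by spoke name.

    `spokes` is a list of {"name": ..., "url": ...} dicts, as in spokes.json.
    An unreachable spoke is reported inline instead of failing the whole call.
    """
    fleet = {}
    for spoke in spokes:
        try:
            with urllib.request.urlopen(f'{spoke["url"]}/status', timeout=timeout) as resp:
                fleet[spoke["name"]] = json.load(resp)
        except OSError as exc:
            fleet[spoke["name"]] = {"error": f"unreachable: {exc}"}
    return fleet
```

Recording an error entry for a dead spoke, rather than raising, is what makes the hub's unified view useful: one offline host shows up as a red flag in the aggregate instead of breaking the status call for the whole fleet.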
Config Consolidation
All config files — containers.json, spokes.json, and the new backup_result.json — now live in /app/config, which maps to a single mounted volume. Cleaner, and easier to manage.
Upgrading
If you're already running v1.0, upgrading is non-breaking. Pull the new image, update your compose file to mount /app/config, and you're done. The MODE environment variable defaults to spoke, so existing single-host setups continue to work exactly as before.
    services:
      container-backup:
        image: zackwag/flask-container-backup:latest
        container_name: container-backup
        restart: unless-stopped
        ports:
          - 2128:2128
        volumes:
          - /var/run/docker.sock:/var/run/docker.sock:ro
          - /docker/container-backup/config:/app/config
          - /docker:/source
          - onedrive-backup:/destination
        environment:
          - PYTHONUNBUFFERED=1
          - TZ=America/New_York
The image is available on Docker Hub at zackwag/flask-container-backup, and the source is on GitHub.