From the Keyboard of Zachary Wagner

Just a few things on my mind.

We've all been there. You're wiring up a new Docker stack, things are finally working, and you commit and push before realizing your password is sitting right there in plain text in your compose.yaml. In my case it was MLB credentials for mlbserver. Oops.

Here's how I cleaned it up and what I'm doing going forward.

What Happened

I committed a compose.yaml with credentials hardcoded directly in the environment block:

environment:
  - account_username=zackwag@gmail.com
  - account_password=hunter2

I pushed it to a public GitHub repo. I caught it quickly and reset the password, but the damage was done: the secret lived on in the git history even after I deleted the file.

Removing It From History

My first instinct was BFG Repo Cleaner, but BFG matches on filename only — not path. Since I have multiple compose.yaml files across my stacks, that was a non-starter.

git filter-repo supports path filtering, which is exactly what I needed:

cd /tmp
git clone https://github.com/zackwag/docker.git
cd docker
git filter-repo --path opt/stacks/channels-addons/compose.yaml --invert-paths
git remote add origin https://github.com/zackwag/docker.git
git push --force origin main

Worth noting: git filter-repo refuses to run on a non-fresh clone by default. Clone fresh, run it there, force push. Don't fight it.
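Before and after a rewrite like this, you can audit whether the string is still reachable anywhere in history with git log -S. A throwaway-repo demonstration (the paths and the sample secret are obviously illustrative):

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q .
printf 'account_password=hunter2\n' > compose.yaml
git add compose.yaml
git -c user.email=me@example.com -c user.name=me commit -q -m "add stack"
# -S searches every commit's diff for the string; any output at all means
# the secret is still present somewhere in history
git log --all --oneline -S 'hunter2'
```

Run the same log command after filter-repo and a clean history prints nothing.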

The Right Pattern Going Forward

The fix is straightforward — .env files. Keep secrets out of the compose file entirely and reference them as variables.

compose.yaml

environment:
  - account_username=${MLB_USERNAME}
  - account_password=${MLB_PASSWORD}

.env (never committed)

MLB_USERNAME=zackwag@gmail.com
MLB_PASSWORD=your_password_here

.env.example (committed as a template)

MLB_USERNAME=
MLB_PASSWORD=

.gitignore

.env

Docker Compose picks up .env automatically from the same directory as your compose.yaml. No extra configuration needed.
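Once the .gitignore rule is in place, git check-ignore will confirm that .env can never be committed by accident. A quick throwaway-repo sanity check (file contents are illustrative):

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q .
echo '.env' > .gitignore
echo 'MLB_PASSWORD=hunter2' > .env
# prints the .gitignore rule that matched, and exits non-zero if nothing does
git check-ignore -v .env
```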

Not Everything Needs to Be a Secret

Worth calling out — not everything in your compose file needs to move to .env. In my Caddy stack I have things like DOMAIN, EMAIL, upstream IPs, and internal TLDs. None of that is sensitive. The rule of thumb:

  • Secrets → .env (passwords, tokens, API keys)
  • Config → fine in compose.yaml (domains, IPs, emails, paths)

Bonus: Nuking Your Git History Entirely

Since I'd already made a mess of the history, I took the opportunity to squash everything down to a single clean commit:

git checkout --orphan fresh
git add -A
git commit -m "Initial commit"
git branch -D main
git branch -m main
git push --force origin main

Clean slate. Felt good.

Takeaways

  • Add .gitignore and .env.example before you write your first compose.yaml
  • If you do commit a secret, reset it immediately — history cleanup is hygiene, not the fix
  • git filter-repo is the right tool for surgical history rewrites
  • Public repo bots are fast. Assume any exposed secret was seen.

Back in February, I wrote about how I finally gave my home lab a real backup strategy using a containerized Flask server, rclone, and OneDrive. The solution worked well — but it only worked for a single host. If you run containers across multiple machines, you were on your own.

That changes with v2.0.

What Was Missing

The original setup was straightforward: one container, one host, one containers.json, and a cron job to kick things off. It solved the problem I had at the time.

But home labs grow. As I added more hosts, I found myself duplicating the setup and having no single place to check on the health of all my backups. That itch needed scratching.

Introducing the Hub/Spoke Architecture

v2.0 introduces a hub/spoke model for multi-host backup orchestration. Every instance of flask-container-backup is now a spoke by default — it behaves exactly as it did in v1.0. Nothing breaks.

The new piece is the hub. Set MODE=hub in your environment, point it at a spokes.json config file listing your remote spoke agents, and you now have a central orchestrator that can coordinate backups and aggregate status across your entire fleet.

compose.yaml

environment:
  - MODE=hub

spokes.json

[
  { "name": "host-a", "url": "http://192.168.1.10:2128" },
  { "name": "host-b", "url": "http://192.168.1.11:2128" }
]

One guardrail worth noting: if you set MODE=hub but spokes.json is missing or empty, the container exits at startup with a fatal error. No silent failures.
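A minimal sketch of what that startup check might look like (function and message names are illustrative, not the project's actual code):

```python
import json
import os
import sys


def load_spokes(config_path, mode):
    """Fail fast: hub mode without a usable spokes.json is a fatal error."""
    if mode != "hub":
        return []  # spoke mode needs no spokes.json at all
    if not os.path.exists(config_path):
        sys.exit("FATAL: MODE=hub but spokes.json is missing")
    with open(config_path) as f:
        spokes = json.load(f)
    if not spokes:
        sys.exit("FATAL: MODE=hub but spokes.json is empty")
    return spokes
```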

New: The /status Endpoint

Every spoke now exposes a GET /status endpoint that returns the result of the last backup run — including a timestamp, which containers were backed up, and any errors encountered. Results are also written to backup_result.json after each run, so they survive a container restart.

{
  "timestamp": "2026-04-08T13:00:00",
  "containers_backed_up": ["caddy", "freshrss", "homeassistant"],
  "errors": []
}

The hub aggregates this across all configured spokes when you hit its own /status endpoint, giving you a unified view of backup health across every host in one call.
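Conceptually the hub's aggregation is just a fan-out over the configured spokes. A hedged sketch of the shape (the fetch callable is injected here; the real hub presumably does HTTP GETs against each spoke's /status):

```python
def aggregate_status(spokes, fetch):
    """Collect each spoke's /status result, keyed by spoke name.

    A spoke that fails to respond becomes an error entry in the report
    rather than sinking the whole call (names here are illustrative).
    """
    report = {}
    for spoke in spokes:
        try:
            report[spoke["name"]] = fetch(spoke["url"] + "/status")
        except Exception as exc:
            report[spoke["name"]] = {"errors": [str(exc)]}
    return report
```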

Config Consolidation

All config files — containers.json, spokes.json, and the new backup_result.json — now live in /app/config, which maps to a single mounted volume. Cleaner, and easier to manage.

Upgrading

If you're already running v1.0, upgrading is non-breaking. Pull the new image, update your compose file to mount /app/config, and you're done. The MODE environment variable defaults to spoke, so existing single-host setups continue to work exactly as before.

services:
  container-backup:
    image: zackwag/flask-container-backup:latest
    container_name: container-backup
    restart: unless-stopped
    ports:
      - 2128:2128
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - /docker/container-backup/config:/app/config
      - /docker:/source
      - onedrive-backup:/destination
    environment:
      - PYTHONUNBUFFERED=1
      - TZ=America/New_York

The image is available on Docker Hub at zackwag/flask-container-backup, and the source is on GitHub.

I run several containers in my home lab, and the missing piece of the puzzle has been backups. That is to say, I had no backup strategy at all. In my spare time I've been working out how to back up the container storage, and I'm pretty satisfied with my current solution.

Prerequisites

I keep all my containers' persistent data in folders on the host machine in the format /docker/{service-name}, so /docker/caddy, for instance. That folder tree is the only thing that needs backing up, since it's the hardest to recreate.

Some of my containers use SQLite, and some start a separate container for a database (MySQL). Copying the persistent data while a container is running could cause corruption, so each container needs to be stopped before the copy and started again afterward.
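That stop-copy-start dance is the core invariant. A minimal sketch of it as a context manager, with the stop and start actions injected (in the real server they would go through the mounted Docker socket; all names here are illustrative):

```python
from contextlib import contextmanager


@contextmanager
def paused(container, stop, start):
    """Stop the container for the duration of the copy, and always restart
    it, even if the backup step inside the block raises."""
    stop(container)
    try:
        yield
    finally:
        start(container)
```

Used as `with paused("caddy", stop, start): copy_data()`, the finally clause guarantees the container comes back up no matter what the copy does.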

Finally, I want maximum flexibility, so the backups should be plain files in the filesystem that I can copy with ordinary tools.

Solution

To back up persistent data in a simple way, I needed four pieces:

  • a way to map the destination into the filesystem (rclone)
  • a simple server that can take requests and run the business logic (Flask)
  • a way to keep the whole thing ephemeral (Docker)
  • a way to trigger a backup at any time (REST)

rclone

Rclone is an application that allows you to mount cloud storage as a logical drive and perform I/O operations against it. Since I am a Microsoft 365 subscriber, I chose to use OneDrive.
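For reference, after walking through `rclone config` interactively, a OneDrive remote ends up as a stanza in rclone.conf roughly like this (token and IDs redacted; exact fields vary by account type):

```ini
[onedrive]
type = onedrive
token = {"access_token":"...","expiry":"..."}
drive_id = ...
drive_type = personal
```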

Flask Server

I wanted a container that would perform actions based on either RESTful commands or cron jobs. So I created flask-cron-server, which spins up a Flask server on port 2128.

From there I was able to create server.py, the file that runs the Flask server and is executed on container startup.

It reads in a JSON file called /app/config/containers.json that defines all the containers along with the folder that should be archived and where that archive should be stored.

The call to backup is executed in a separate thread and a 202 Accepted response is sent to the caller to let them know that the command was received, but it is unknown how long it will take.
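The fire-and-forget shape of that endpoint looks roughly like this (a sketch with the backup callable injected, not the actual server.py):

```python
import threading


def handle_backup_request(run_backup):
    """Start the backup in a background thread and answer immediately with
    202 Accepted; the caller is never kept waiting for the archive to finish."""
    worker = threading.Thread(target=run_backup, daemon=True)
    worker.start()
    return "Accepted", 202  # Flask-style (body, status) return value
```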

Finally, the whole thing is driven by a simple JSON config file:

[
    {
        "container_name": "caddy",
        "source_folder": "/source/caddy",
        "destination_folder": "/destination/caddy",
        "retention_days": 7
    },
    {
        "container_name": "freshrss",
        "source_folder": "/source/freshrss",
        "destination_folder": "/destination/freshrss",
        "retention_days": 7
    },
    {
        "container_name": "guacamole",
        "source_folder": "/source/guacamole",
        "destination_folder": "/destination/guacamole",
        "retention_days": 7
    },
    {
        "container_name": "mosquitto",
        "source_folder": "/source/mosquitto",
        "destination_folder": "/destination/mosquitto",
        "retention_days": 7
    },
    {
        "container_name": "ps5-mqtt",
        "source_folder": "/source/ps5-mqtt",
        "destination_folder": "/destination/ps5-mqtt",
        "retention_days": 7
    },
    {
        "container_name": "slash",
        "source_folder": "/source/slash",
        "destination_folder": "/destination/slash",
        "retention_days": 7
    },
    {
        "container_name": "write-freely",
        "source_folder": "/source/writefreely",
        "destination_folder": "/destination/writefreely",
        "retention_days": 7
    },
    {
        "container_name": "homeassistant",
        "source_folder": "/source/ha",
        "destination_folder": "/destination/ha",
        "retention_days": 14
    }
]
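The retention_days field implies a pruning pass over each destination folder after a run. A hedged sketch of how that could work (the server's actual logic may differ; names are illustrative):

```python
import os
import time


def prune_old_archives(folder, retention_days, now=None):
    """Delete archive files older than retention_days; returns what was removed."""
    now = time.time() if now is None else now
    cutoff = now - retention_days * 86400
    removed = []
    for name in sorted(os.listdir(folder)):
        path = os.path.join(folder, name)
        # compare the file's modification time against the retention window
        if os.path.isfile(path) and os.path.getmtime(path) < cutoff:
            os.remove(path)
            removed.append(name)
    return removed
```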

Docker Container

The final step was to create the Docker container and the stack definition:

services:
  container-backup:
    image: zackwag/flask-container-backup:latest
    container_name: container-backup
    restart: unless-stopped
    ports:
      - 2128:2128
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - /docker/container-backup/config:/app/config
      - /docker:/source
      - onedrive-backup:/destination
    environment:
      - PYTHONUNBUFFERED=1
      - TZ=America/New_York
volumes:
  onedrive-backup:
    driver: rclone
    driver_opts:
      remote: onedrive:backup
      allow_other: "true"
      vfs-cache-mode: writes
networks: {}

I'm in the New York timezone; you'll want to change TZ to match where you live.

Also, I made sure to include /var/run/docker.sock:/var/run/docker.sock:ro so the server can start and stop containers.

Finally, I followed the directions for rclone's Docker volume plugin. This allowed me to create the volume onedrive-backup that points to the backup folder in OneDrive.

RESTfully Performing Backups

Now that the container is running I can simply call

curl -X POST {IP ADDRESS}:2128/backup

to back up all the containers specified in containers.json, or just

curl -X POST {IP ADDRESS}:2128/backup/{container name}

to back up a specific container specified in containers.json.

I have set up a daily automation in Home Assistant that calls the main /backup endpoint to kick off backups.
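For completeness, the Home Assistant side can be wired up with a rest_command plus a time-triggered automation. A sketch (the IP address and time are illustrative; adjust to your host):

```yaml
rest_command:
  container_backup:
    url: "http://192.168.1.10:2128/backup"
    method: post

automation:
  - alias: "Nightly container backup"
    trigger:
      - platform: time
        at: "03:00:00"
    action:
      - service: rest_command.container_backup
```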