
Hosting a Private Docker Registry Natively on a Tiny VM (Backed by Unraid)

In a world full of cloud-native solutions, it's easy to forget how powerful simple infrastructure can be. For this project, I set out to host my own Docker image registry — not in a container, not in Kubernetes — but as a native binary on a minimal VM, backed by NFS storage from my Unraid NAS.

This post walks through why I took this approach, how it works, and how you can replicate it in your own homelab or internal tooling setup.

Additional details are in the repo's README.md:
https://github.com/18visions-ci/homelab-docker-registry


Why Not Just Use Docker Hub?

You can — but there are plenty of reasons not to:

  • Docker Hub rate-limits pulls, which can throttle CI/CD pipelines that build frequently
  • Internal tools and test images don't need to live on external infrastructure
  • Full control, no cost surprises
  • It's just fun 🙃

Most people run Docker Distribution (aka the Docker Registry) in a container. But I wanted to try something a little cleaner and more transparent by running it as a systemd service on a tiny 1-core, 1 GB RAM VM.
Could I have run this in a container? Yes. But I'm hoping for a little more durability and finer-grained control by hosting it directly on an Ubuntu VM.


Architecture at a Glance

  • Registry: Native Go binary running on Ubuntu 22.04
  • Storage: Cache-only NFS share on my Unraid NAS (NVMe-backed)
  • Access: Behind Nginx Proxy Manager with HTTPS and basic auth
  • Automation: Deployed and maintained via Ansible roles

Step 1: Provisioning the VM

For stability during installation, I started with 2 GB RAM, then scaled it down to 1 GB after setup.

  • OS Disk: 10 GB (plenty if you're using NFS for storage)
  • OS: Ubuntu 22.04 server
  • Mounted the NFS share from Unraid at /mnt/registry-data
sudo apt-get install -y nfs-common   # NFS client tools, needed before mounting
sudo mkdir -p /mnt/registry-data
sudo mount -t nfs <unraid_ip>:/mnt/user/docker-registry /mnt/registry-data

Tip: Add it to /etc/fstab safely like this:

<unraid_ip>:/mnt/user/docker-registry /mnt/registry-data nfs nofail,x-systemd.automount,_netdev 0 0
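
To pick up the new entry without a reboot:

sudo systemctl daemon-reload
sudo mount /mnt/registry-data   # mounts using the new fstab entry
df -h /mnt/registry-data        # should show the Unraid export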

Step 2: Deploying with Ansible

I broke the playbook down into roles:

  • nfs_mount: mounts the NFS share and ensures nfs-common is installed
  • registry_install: downloads and installs the Docker Distribution binary
  • registry_config: drops a config file at /etc/docker/registry/config.yml (sketched just after this list)
  • registry_service: installs a systemd unit and starts the service
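
For reference, here's a minimal sketch of the kind of config registry_config drops in place, assuming the standard filesystem storage driver pointed at the Step 1 mount:

version: 0.1
log:
  level: info
storage:
  filesystem:
    rootdirectory: /mnt/registry-data
http:
  addr: :5000

The filesystem driver writes all blobs and manifests under rootdirectory, so every image layer ends up on the Unraid share.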

Once grouped in a playbook, it’s just:

ansible-playbook -i inventory.yml playbook.yml

For transparency: I ran this in Semaphore, since I already keep my secrets, variable groups, and inventories there. But if you're after a simpler approach, running the command above directly works too.
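
Wherever you run it, playbook.yml itself is just the four roles in order (the registry host group name here is my assumption; use whatever your inventory defines):

- hosts: registry
  become: true
  roles:
    - nfs_mount
    - registry_install
    - registry_config
    - registry_service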

Check that the service is live:

systemctl status docker-registry
curl http://localhost:5000/v2/
# returns: {}
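
If the service doesn't come up, it helps to know roughly what registry_service installed. Here's a sketch of the unit, assuming the binary landed at /usr/local/bin/registry; RequiresMountsFor keeps it from starting before the NFS share is mounted:

[Unit]
Description=Docker Distribution registry
Wants=network-online.target
After=network-online.target
RequiresMountsFor=/mnt/registry-data

[Service]
ExecStart=/usr/local/bin/registry serve /etc/docker/registry/config.yml
Restart=on-failure

[Install]
WantedBy=multi-user.target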

Step 3: Exposing the Registry via Nginx Proxy Manager

Using NPM (running elsewhere in my homelab), I added a new proxy host:

  • Domain: registry.mydomain.com
  • Forward IP: the VM’s IP
  • Forward Port: 5000
  • Scheme: http
  • SSL: enabled via Let's Encrypt
  • Access List: optional for basic auth

Now the registry is safely exposed over HTTPS, with or without authentication.
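
One gotcha: Nginx's default client_max_body_size is 1 MB, so pushing any real image layer through the proxy will fail with a 413. In the proxy host's Advanced tab, add a custom Nginx directive to lift the limit:

client_max_body_size 0;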

Step 4: Testing Locally

Here’s how to test it from your local machine or CI runner:

# Authenticate if needed
docker login registry.mydomain.com

# Tag and push an image
docker pull alpine
docker tag alpine registry.mydomain.com/alpine
docker push registry.mydomain.com/alpine

# Remove and re-pull
docker rmi registry.mydomain.com/alpine
docker pull registry.mydomain.com/alpine

Want to explore what’s in the registry?

curl -s https://registry.mydomain.com/v2/_catalog | jq
curl -s https://registry.mydomain.com/v2/alpine/tags/list | jq

Final Thoughts

This was a great weekend project with practical value. I now have:

  • Faster CI/CD pipelines with no rate limits
  • Full control over my images
  • Minimal resource usage — the entire setup idles under 100 MB of RAM

Next steps? I may add a basic UI or Prometheus metrics scraper — but for now, it’s stable, fast, and clean.
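
If I do go the Prometheus route, Distribution can already expose metrics on a debug endpoint. A sketch of the relevant config.yml addition, per the standard config format (untested in my setup):

http:
  addr: :5000
  debug:
    addr: :5001
    prometheus:
      enabled: true
      path: /metrics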