Presence Reporting with Python, Prometheus, and CI/CD Automation

One of my personal goals this year has been to build more of my own internal services. I wanted something small but meaningful—something that integrates DevOps practices with Python application development, and also provides observable, testable outputs. That led me to create a lightweight presence reporting service that scratches several itches at once: custom service deployment, observability integration, and end-to-end automation with GitHub Actions and Jenkins.

What Is It?

At its core, this service listens for HTTP POSTs from a login script I run when logging into my Mac or Windows workstation. That script hits a /proxmox-status endpoint that triggers the service to collect resource data from my homelab’s Proxmox VE and Proxmox Backup Server instances. That data is summarized and sent to a Discord channel via webhook.
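The summarize-and-notify step might look roughly like this. This is an illustrative sketch, not the actual code in discord_notifier.py; the function name and the exact Discord message format are assumptions, though the node, cpu, mem, and maxmem fields do appear in Proxmox's /nodes API responses:

```python
# Hypothetical sketch: turning Proxmox node stats into a Discord webhook payload.
# The function name and message wording are illustrative, not the real service code.

def build_discord_payload(nodes: list[dict]) -> dict:
    """Format per-node CPU and memory usage into a Discord webhook message body."""
    lines = []
    for node in nodes:
        cpu_pct = node["cpu"] * 100                      # Proxmox reports CPU as a 0-1 fraction
        mem_pct = node["mem"] / node["maxmem"] * 100     # bytes used / bytes total
        lines.append(f"{node['node']}: CPU {cpu_pct:.0f}%, RAM {mem_pct:.0f}%")
    # Discord webhooks accept a JSON body with a "content" field
    return {"content": "Login detected - homelab status:\n" + "\n".join(lines)}
```

Posting this dict as JSON to the Discord webhook URL is then a single HTTP POST.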

To make the service observable and Prometheus-friendly, I also exposed a /status endpoint:

import os

from fastapi import FastAPI  # assuming FastAPI; any framework with this decorator style works

app = FastAPI()

@app.get("/status")
def get_status():
    return {"status": "up", "version": os.getenv("IMAGE_TAG", "unknown")}

This lets Prometheus probe the endpoint through Blackbox Exporter, giving me confidence that the container is up and returning valid responses.
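The Prometheus side follows the standard Blackbox Exporter relabeling pattern; a scrape job might look like this (hostnames and ports here are placeholders, not my actual homelab values):

```yaml
# Example Prometheus scrape job probing /status via Blackbox Exporter.
# Hostnames and ports are placeholders.
scrape_configs:
  - job_name: "presence-reporter"
    metrics_path: /probe
    params:
      module: [http_2xx]            # Blackbox module expecting an HTTP 2xx response
    static_configs:
      - targets:
          - http://docker-host.lan:8000/status
    relabel_configs:
      - source_labels: [__address__]
        target_label: __param_target          # pass the target URL to Blackbox
      - source_labels: [__param_target]
        target_label: instance                # keep the probed URL as the instance label
      - target_label: __address__
        replacement: blackbox-exporter.lan:9115   # scrape Blackbox Exporter itself
```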

Build, Test, and Deploy

The project is broken into two GitHub repos:

  • Public repo: contains the application source code, tests, and Dockerfile.
  • Private repo: houses my GitHub Actions workflows used for automated deployments.

Repo Structure (Code)

.
├── Dockerfile
├── Jenkinsfile
├── README.md
├── requirements.txt
├── src
│   ├── __init__.py
│   ├── discord_notifier.py
│   ├── logger.py
│   ├── main.py
│   └── proxmox_utils.py
└── tests
    ├── __init__.py
    └── test_proxmox_utils.py

Unit tests run during Jenkins builds to ensure proxmox_utils.py continues to parse and return valid data structures from the Proxmox APIs.
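A test in tests/test_proxmox_utils.py might look something like this. The helper name summarize_nodes() and the exact response shape are assumptions for illustration, not the actual implementation:

```python
# Illustrative unit test: summarize_nodes() is a stand-in for a real helper in
# proxmox_utils.py that reduces a Proxmox /nodes API response to the fields we need.

def summarize_nodes(api_response: dict) -> list[dict]:
    """Pick out node name, status, and CPU usage from a Proxmox /nodes response."""
    return [
        {"node": n["node"], "status": n["status"], "cpu": n["cpu"]}
        for n in api_response.get("data", [])
    ]

def test_summarize_nodes_returns_expected_fields():
    # A canned response shaped like the Proxmox API, so the test needs no live cluster
    fake_response = {
        "data": [{"node": "pve1", "status": "online", "cpu": 0.02, "uptime": 12345}]
    }
    result = summarize_nodes(fake_response)
    assert result == [{"node": "pve1", "status": "online", "cpu": 0.02}]
```

Because the helper is a pure function over a dict, the test runs in Jenkins with no network access to the homelab.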


Deployment: GitHub Actions + SSH + Docker Compose

The private repo uses a workflow_dispatch-triggered GitHub Action that deploys the service to a Docker host in my homelab. The workflow:

  • Connects via SSH to the Docker host
  • Copies the Docker Compose file
  • Sets environment variables for Proxmox and PBS credentials
  • Pulls the latest image from my internal Docker registry
  • Starts the container

Here’s a shortened view of the core workflow logic:

- name: Run Docker Compose
  run: |
    ssh -i ~/.ssh/deploy_key ${{ env.SSH_USER }}@${{ env.DOCKER_PROD }} << EOF
      cd ~/homelab-presence-reporter
      export LOGSTASH_HOST="${{ vars.LOGSTASH_HOST_PROD }}"
      ...
      docker compose pull
      docker compose up -d
    EOF

Observability: Logstash + GELF + Prometheus

The container uses Docker’s gelf logging driver to forward structured logs to Logstash, giving me a searchable timeline of every login report and Discord notification.
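The gelf driver is configured per service in the Compose file; a sketch of that section might look like this (image name and port are placeholders, with the Logstash host injected by the deploy workflow):

```yaml
# Example docker-compose logging config for the gelf driver.
# Image name and GELF port are placeholders for my actual values.
services:
  presence-reporter:
    image: registry.lan/presence-reporter:latest
    logging:
      driver: gelf
      options:
        gelf-address: "udp://${LOGSTASH_HOST}:12201"   # Logstash GELF input
        tag: "presence-reporter"                       # label for filtering in Logstash
```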

Additionally, the /status endpoint lets me integrate with Prometheus and Blackbox Exporter. This small endpoint gives me a clean heartbeat signal, which is perfect for my use case.


Why This Matters

This project has helped me:

  • Reinforce Python fundamentals with a real-world application
  • Write unit tests and integrate them into Jenkins builds
  • Explore GitHub Actions workflows for infrastructure-as-code-style deployments
  • Build toward a larger goal of internal tools that improve my homelab and observability stack

I now have a reproducible pattern I can use to deploy future Python-based services with CI/CD, structured logging, and Prometheus compatibility baked in.


This may be a small service, but it encapsulates everything I want to keep growing in: automation, observability, Python, and production-ready DevOps practices—even in the homelab.