
Why I Self-Host Everything

The reasoning behind running 35+ services on bare metal instead of reaching for SaaS.

The default answer is SaaS. Mine isn't.

Every tool I reach for starts with the same question: can I run this myself? Not because I enjoy the operational overhead — I do, but that's beside the point. The real reasons are control, cost, and compound learning.

Control

When you self-host, you own the data, the uptime, and the upgrade schedule. No surprise pricing changes. No feature removals. No "we're sunsetting this product" emails that force a migration on someone else's timeline.

I run Coolify for deployments, Uptime Kuma for monitoring, n8n for automation, Nginx Proxy Manager for routing, and a dozen other services across three Proxmox nodes. If something breaks, I fix it. If something needs to change, I change it. There's no support ticket queue between me and the solution.

Cost

The math is straightforward. Three mini PCs running Proxmox cost less per year than a single team seat on most SaaS platforms. The services I self-host — project management, CI runners, databases, monitoring, reverse proxies, DNS management — would cost thousands per month at commercial rates.

The hardware pays for itself in months. After that, the only recurring costs are electricity and my time.

Compound learning

This is the real return. Every service I deploy teaches me something about networking, storage, containers, security, or system design. That knowledge compounds. When I architect a client's infrastructure, I'm drawing from hands-on experience with the same tools at a smaller scale.

Self-hosting is my lab. The infrastructure I run at home directly informs the infrastructure I build for business.

The stack

  • Compute: 3x Proxmox nodes (2x Beelink SER7, 1x custom Ryzen build)
  • Storage: ZFS mirrors on each node, TrueNAS for NFS shares
  • Networking: TP-Link Omada (managed switches, APs, gateway) with VLANs
  • Remote access: Tailscale mesh + Cloudflare tunnels (zero open ports)
  • Monitoring: Uptime Kuma + Better Stack for external checks
  • Deployment: Coolify (self-hosted PaaS) + GitHub Actions
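The monitoring layer above is conceptually just periodic HTTP probes. A minimal sketch of that kind of external check, using only the standard library (this is an illustration of the idea, not how Uptime Kuma or Better Stack are actually implemented):

```python
# Minimal external HTTP health probe: the same basic check a
# monitoring service performs on each endpoint. Illustrative only.
import urllib.request
import urllib.error

def check(url: str, timeout: float = 5.0) -> bool:
    """Return True if the URL answers with a 2xx/3xx status in time."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return 200 <= resp.status < 400
    except (urllib.error.URLError, TimeoutError):
        return False  # DNS failure, refused connection, or timeout

# Example: probe a hypothetical internal service (placeholder URL).
# check("https://status.example.internal")
```

The real services add scheduling, retries, latency history, and alerting on top, but the core signal is no more than this boolean per endpoint.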

When I wouldn't self-host

Self-hosting isn't always the answer. I use Vercel for static sites where edge caching matters. I use Supabase when I need managed Postgres with auth and real-time out of the box. The decision is always pragmatic: if the managed service solves the problem better and the lock-in is acceptable, use it.

But for core infrastructure — the things that run my business — I want the keys.