In the “What are YOU self-hosting?” thread, plenty of people are self-hosting a huge number of applications, but there’s little discussion of the platforms those applications run on.

What does your self-hosted infrastructure look like?

Here are some examples of more detailed questions, but I’m sure there are plenty more topics that would be interesting:

  • What hardware do you run on? Or do you use a data center/cloud?
  • Do you use containers or plain packages?
  • Orchestration tools like K8s or Docker Swarm?
  • How do you handle logs?
  • How about updates?
  • Do you have any monitoring tools you love?
  • Etc.

I’m starting to put together my own homelab. I’ll definitely be starting small, but I’m interested to hear what other people have done with their setups.

  •  rs5th   ( @rs5th@lemmy.scottlabs.io ) · 1 year ago

    My setup is a mix of on-prem and VPS.

    On-Prem

    • Primary Cluster (24 cores, 192 GB RAM, 36 TB usable storage)
      • Two Dell R610 (12 cores, 96 GB RAM each)
      • vCenter 7.0 managing ESXi 6.7 hosts (the processors are too old for ESXi 7.0)
      • Kubernetes 1.24
        • Single controller VM
        • Two worker VMs
        • OS: Ubuntu 20.04
        • K8s Flavor: Kubeadm
      • Use: Almost everything
      • Storage:
        • Synology 1515 (11 TB usable, RAID 5) - vSphere datastore via NFS
        • Synology 1517 (25 TB usable, RAID 5) - Kubernetes mounts via NFS, media, general NAS stuff
    • Standalone Node (4 cores, 16 GB RAM, 250 GB SSD)
      • Lenovo M900 micro-PC
      • OS: Ubuntu 22.04
      • Kubernetes 1.24
      • K8s Flavor: k3s
      • Use: critical network services (DNS/DHCP) that stay up even if the more complex primary cluster goes down; also runs Frigate, since the USB Coral TPU is plugged in here
    • Networking / Other
      • DNS:
        • Primary: AdGuard Home running on Standalone
        • Internal domain: BIND VM running in Primary Cluster
      • Firewall: Juniper SRX 220H
      • Switch: Juniper EX2200-48
      • WiFi: 3x Unifi In-Wall APs
      • Power:
        • UPS backing compute and storage (10-15 minutes of runtime)
        • UPS backing networking gear (15-20 minutes of runtime)

    VPS

    • Single Linode (2 cores, 4 GB RAM, 80 GB storage)
      • OS: Ubuntu 22.04
      • Kubernetes 1.24
      • K8s Flavor: k3s
      • Use: Uptime Kuma to monitor on-prem infrastructure, plus services that can’t go down during home ISP or power outages (like the family Rocket.Chat).
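
    As an illustration only (image tag and storage path are assumptions, not the commenter’s actual manifests), a single-replica Uptime Kuma on a small k3s node could be declared with a Deployment like this:

    ```yaml
    # Minimal Uptime Kuma Deployment for a single-node k3s cluster.
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: uptime-kuma
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: uptime-kuma
      template:
        metadata:
          labels:
            app: uptime-kuma
        spec:
          containers:
            - name: uptime-kuma
              image: louislam/uptime-kuma:1   # official image
              ports:
                - containerPort: 3001          # Uptime Kuma's default web port
              volumeMounts:
                - name: data
                  mountPath: /app/data         # where Uptime Kuma keeps its SQLite DB
          volumes:
            - name: data
              hostPath:
                path: /var/lib/uptime-kuma     # assumed path; a PVC would be more robust
    ```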

    Every service (except Plex) is containerized and running in Kubernetes; Plex will be migrated soon™. Everything in Kubernetes is managed as Infrastructure as Code with FluxCD, following GitOps principles. Secrets are stored in Git, encrypted and decrypted with Mozilla SOPS. Repos are currently hosted on GitHub, but I’m considering Gitea, though that might present a bit of a bootstrapping problem if all the infrastructure that hosts Gitea is declared inside Gitea…
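
    For the curious, here’s roughly what the FluxCD + SOPS wiring looks like (illustrative names like `flux-system` and `sops-age`, not my exact config): a Flux Kustomization points at an in-cluster decryption key, so SOPS-encrypted manifests committed to Git are decrypted at reconcile time.

    ```yaml
    # Flux Kustomization that reconciles ./apps from the Git repo and
    # decrypts SOPS-encrypted manifests with an age key stored in-cluster.
    apiVersion: kustomize.toolkit.fluxcd.io/v1
    kind: Kustomization
    metadata:
      name: apps
      namespace: flux-system
    spec:
      interval: 10m
      path: ./apps
      prune: true
      sourceRef:
        kind: GitRepository
        name: flux-system
      decryption:
        provider: sops
        secretRef:
          name: sops-age   # Kubernetes Secret holding the age private key (assumed name)
    ```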

    • Git repos are currently hosted in GitHub, but I’m considering Gitea …

      I have a similar-ish setup and landed on hosting Gitea outside the cluster with plain docker-compose, along with Renovate for dependency updates within the cluster. These two containers’ configuration files live on GitHub, where GitHub’s renovate-bot keeps them up to date. Changes get picked up by polling the Git repo and applied to the host through Portainer, though I’m planning to swap that out for a slightly more complex solution relying solely on Gitea’s new CI/CD tooling.
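
      For reference, the out-of-cluster Gitea piece is roughly this shape in docker-compose (illustrative tag, ports, and paths, not my exact file):

      ```yaml
      # Minimal Gitea service run outside the cluster via docker-compose.
      services:
        gitea:
          image: gitea/gitea:1.21   # pinned tag so renovate-bot can propose bumps
          restart: unless-stopped
          environment:
            - USER_UID=1000
            - USER_GID=1000
          volumes:
            - ./gitea-data:/data    # repos, config, and DB all live under /data
          ports:
            - "3000:3000"           # web UI
            - "2222:22"             # SSH for git push/pull
      ```

      Pinning the image tag (rather than `latest`) is what lets Renovate open update PRs against the compose file.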