Almost 3 years ago, I paid for a few VPSs on which I host a variety of services (Vaultwarden, Gitea, Drone, MeshCentral, Metabase, GPT Researcher, etc.).

Interspersed among the VPSs are a series of data processing containers to handle crypto data.

With the contract coming up for renewal, I’m exploring how to separate the hardware from the software, so that I’d only need to deploy a container to a pool of servers and the infrastructure would decide which server to run the container on, correctly route incoming requests, and update Cloudflare DNS for containers that are meant to be publicly facing.

I went through the Kubernetes the Hard Way tutorial and have a cursory understanding of Kubernetes, but with some substantial gaps which I couldn’t Google away.

For the replacement platform, I’m thinking to:

- Combine multiple VPSs as a baseline cluster to run internet-facing loads

- Use some home servers for backend/non-internet-facing processes and make the data available on the internet-facing hosts.

- Add the ability to dynamically add more VPSs or preemptible instances from GCP/AWS

I’m still stuck on the first part: standing up a Kubernetes cluster using multiple VPSs, each with a different public IPv4 address.

Googling around heavily suggests this is not a common use case, or at least that I’m not using the correct terms.

Is there a better solution for me to pursue?

  • WiseCookie69@alien.topB · 11 months ago

    Look at K3s. For a while now it has had built-in support for Tailscale (you can also use Headscale).
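
    A minimal sketch of what that bootstrap can look like, assuming a recent K3s (the --vpn-auth flag shipped around v1.27) and a pre-generated Tailscale (or Headscale) auth key; the token and key values are placeholders, and the exact flag syntax is worth verifying against the K3s docs:

    ```python
    # Sketch: run the K3s installer on a node with Tailscale integration
    # enabled. K3S_TOKEN must be the same on every node; the auth key is
    # a placeholder. Verify --vpn-auth syntax against the K3s docs.
    import os
    import subprocess

    TS_AUTHKEY = "tskey-auth-..."       # placeholder Tailscale auth key
    K3S_TOKEN = "shared-cluster-token"  # placeholder shared cluster token

    def install_k3s(role: str, server_url: str = "") -> None:
        """role is 'server' for the first node, 'agent' for the rest."""
        cmd = ("curl -sfL https://get.k3s.io | sh -s - " + role +
               f' --vpn-auth="name=tailscale,joinKey={TS_AUTHKEY}"')
        env = {**os.environ, "K3S_TOKEN": K3S_TOKEN}
        if server_url:                   # agents need the server's URL,
            env["K3S_URL"] = server_url  # e.g. https://<tailscale-ip>:6443
        subprocess.run(["bash", "-c", cmd], env=env, check=True)
    ```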

    Alternatively, it doesn’t really matter how or where your nodes are located if you add a VPN to allow them to talk to each other.

    Your main issue would be storage. But that’s easily fixed with a topology-aware CSI, and then keeping your stateful workloads either wherever their volumes got provisioned, or forcing them to be provisioned on your home servers.
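
    For illustration, a sketch of that pinning idea with the kubernetes Python client using plain-dict manifests. The location=home node label and class name are invented, and the provisioner string depends on your CSI driver (the one shown is Longhorn’s):

    ```python
    # Sketch: a StorageClass that delays binding until a pod is scheduled
    # (so the volume lands with the pod), restricted to nodes labelled
    # location=home. Label, class name and topology key are hypothetical.
    from kubernetes import client, config

    config.load_kube_config()

    storage_class = {
        "apiVersion": "storage.k8s.io/v1",
        "kind": "StorageClass",
        "metadata": {"name": "home-local"},
        "provisioner": "driver.longhorn.io",  # swap for your CSI driver
        "volumeBindingMode": "WaitForFirstConsumer",
        "allowedTopologies": [{
            "matchLabelExpressions": [
                {"key": "location", "values": ["home"]},
            ],
        }],
    }
    client.StorageV1Api().create_storage_class(body=storage_class)
    ```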

    • Qxt78@alien.topB · 11 months ago

      That is what Rook-Ceph or Longhorn is for. Longhorn is good for beginners.

    • hardyrekshin@alien.topOPB · 11 months ago

      Good point about Tailscale. It hadn’t occurred to me.

      How does DNS work for a Tailscale-networked cluster of servers?

      My understanding is that at least one of the nodes would need to be designated as the ingress. I could potentially also have all the master nodes hold the ingress, but then I believe I’d need to use round-robin DNS in Cloudflare to ensure the domains always point at the cluster.
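
      If I go the round-robin route, keeping Cloudflare in sync looks scriptable; a rough sketch against the Cloudflare v4 API (the zone ID, token, hostname and IPs are placeholders, and something like external-dns could manage this from inside the cluster instead):

      ```python
      # Sketch: upsert one A record per ingress node so the hostname
      # round-robins across them. Zone ID, token and IPs are placeholders.
      import requests

      ZONE = "your-zone-id"
      TOKEN = "your-api-token"
      HOST = "apps.example.com"
      INGRESS_IPS = ["203.0.113.10", "203.0.113.11", "203.0.113.12"]

      api = f"https://api.cloudflare.com/client/v4/zones/{ZONE}/dns_records"
      headers = {"Authorization": f"Bearer {TOKEN}"}

      # Delete stale A records for the host, then recreate one per node.
      existing = requests.get(api, headers=headers,
                              params={"type": "A", "name": HOST}).json()["result"]
      for record in existing:
          requests.delete(f"{api}/{record['id']}", headers=headers)

      for ip in INGRESS_IPS:
          requests.post(api, headers=headers, json={
              "type": "A", "name": HOST, "content": ip,
              "ttl": 60, "proxied": False,
          })
      ```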

      Storage might be a problem, but being more cloud-aware potentially means I can run DuckDB against MinIO to scan S3 objects when doing data-intensive tasks.
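
      Something like this sketch is what I have in mind (the endpoint, credentials and bucket are placeholders):

      ```python
      # Sketch: point DuckDB's httpfs extension at a MinIO endpoint and
      # scan parquet objects in place. Endpoint, credentials and bucket
      # names are placeholders.
      import duckdb

      con = duckdb.connect()
      con.execute("INSTALL httpfs")
      con.execute("LOAD httpfs")
      con.execute("SET s3_endpoint='minio.internal:9000'")
      con.execute("SET s3_access_key_id='minio-user'")
      con.execute("SET s3_secret_access_key='minio-pass'")
      con.execute("SET s3_use_ssl=false")
      con.execute("SET s3_url_style='path'")  # MinIO serves path-style URLs

      df = con.execute(
          "SELECT symbol, avg(price) AS avg_price "
          "FROM read_parquet('s3://crypto/trades/*.parquet') GROUP BY symbol"
      ).df()
      ```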

  • adamshand@alien.topB · 11 months ago

    As another option, why not Docker Swarm? I think it will do what you need, and it’s much simpler than k8s.

    I’ve been using CapRover and really liking how simple and reliable it’s been. I’m in the process of going from a single node to a swarm; still researching/experimenting, but liking it so far!

    • hardyrekshin@alien.topOPB · 11 months ago

      I read into Docker Swarm; the community surrounding it is much smaller, and there appear to be far fewer tutorials compared with k8s.

      In addition to hosting the usual self-hosted applications that are often mentioned, I also do a good amount of processing of crypto and other financial data. Being able to dynamically add/remove servers and have the containers automatically assigned and routed should, at least in theory, save me a lot of time administering the infrastructure.

  • natermer@alien.topB · 11 months ago

    The problem with hosting Kubernetes on VPSes is that exposing the Kubernetes API to the public is pretty sketchy. I know a lot of people do it, but I don’t like the idea.

    I also like having multiple smaller Kubernetes clusters rather than a single big one. It’s easier to manage, and breakage is more isolated. You can incorporate external services into Kubernetes pretty easily using Services and Endpoints.
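
    As a sketch of that trick (the homedb name, namespace, port and the Tailscale-style IP are all invented), a selector-less Service plus a matching Endpoints object makes an out-of-cluster host resolvable in-cluster:

    ```python
    # Sketch: expose an external Postgres (say, on a home server at a
    # made-up VPN IP) inside the cluster as "homedb.default.svc". A
    # Service with no selector plus a matching Endpoints object does it.
    from kubernetes import client, config

    config.load_kube_config()
    core = client.CoreV1Api()

    service = {
        "apiVersion": "v1",
        "kind": "Service",
        "metadata": {"name": "homedb"},
        "spec": {"ports": [{"port": 5432, "targetPort": 5432}]},  # no selector
    }
    endpoints = {
        "apiVersion": "v1",
        "kind": "Endpoints",
        "metadata": {"name": "homedb"},  # must match the Service name
        "subsets": [{
            "addresses": [{"ip": "100.64.0.5"}],  # the external host's IP
            "ports": [{"port": 5432}],
        }],
    }
    core.create_namespaced_service("default", service)
    core.create_namespaced_endpoints("default", endpoints)
    ```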

    I suggest using K3s, as it is very lightweight, easy to deploy, and k8s-compliant. K3s deploys a default set of services designed for more ‘IoT’-style applications, things like servicelb. These can be disabled at install time if you want.
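
    For example, a sketch of disabling the bundled extras at install time (verify the component names against the K3s docs for your version):

    ```python
    # Sketch: install a K3s server with the bundled servicelb and traefik
    # components disabled, via the INSTALL_K3S_EXEC environment variable.
    import os
    import subprocess

    env = {**os.environ,
           "INSTALL_K3S_EXEC": "server --disable servicelb --disable traefik"}
    subprocess.run(
        ["bash", "-c", "curl -sfL https://get.k3s.io | sh -s -"],
        env=env, check=True,
    )
    ```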

    For managing it all, I like to run ArgoCD on an ‘administrative’ Kubernetes cluster local to you. It has no problem connecting to multiple clusters, and its declarative YAML configuration works well with a git-based workflow. The web UI is nice and widely used.
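
    An ArgoCD Application is itself just a custom resource you can create from the admin cluster; a sketch where the kubeconfig context, repo URL, path and destination cluster are all placeholders:

    ```python
    # Sketch: register a git-tracked app with ArgoCD from the admin
    # cluster. Context name, repo URL, path and destination server are
    # placeholders for your own setup.
    from kubernetes import client, config

    config.load_kube_config(context="admin-cluster")  # hypothetical context

    app = {
        "apiVersion": "argoproj.io/v1alpha1",
        "kind": "Application",
        "metadata": {"name": "vaultwarden", "namespace": "argocd"},
        "spec": {
            "project": "default",
            "source": {
                "repoURL": "https://git.example.com/infra.git",
                "path": "apps/vaultwarden",
                "targetRevision": "main",
            },
            # ArgoCD can target a remote cluster it has credentials for.
            "destination": {"server": "https://web-facing-cluster:6443",
                            "namespace": "vaultwarden"},
            "syncPolicy": {"automated": {"prune": True}},
        },
    }
    client.CustomObjectsApi().create_namespaced_custom_object(
        group="argoproj.io", version="v1alpha1",
        namespace="argocd", plural="applications", body=app,
    )
    ```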

    • hardyrekshin@alien.topOPB · 11 months ago

      What’s your take on inter-cluster communication?

      E.g. I could hypothetically have 3 clusters:

      1. Administrative
      2. Web-Facing
      3. Backend

      Potential use cases:

      • Backend might produce updated parquet files which need to be transferred and made accessible to Metabase in the Web-Facing cluster (one candidate pattern is sketched after this list).
      • Web-Facing might need to send batched inputs (from webhooks, for example) to Backend for processing.
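
      For the first case, one low-tech pattern I’m considering is making shared object storage the interface between the clusters; a sketch with boto3 (the endpoint, bucket, key and credentials are placeholders):

      ```python
      # Sketch: the Backend cluster publishes refreshed parquet to a
      # shared MinIO bucket; Metabase/DuckDB on the Web-Facing side
      # reads it from there. All names here are placeholders.
      import boto3

      s3 = boto3.client(
          "s3",
          endpoint_url="http://minio.internal:9000",
          aws_access_key_id="minio-user",
          aws_secret_access_key="minio-pass",
      )
      s3.upload_file("out/trades-latest.parquet", "crypto",
                     "published/trades-latest.parquet")
      ```
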
      • ionfury@alien.topB · 11 months ago

        A service mesh is what you’re looking for here. Istio is a front-runner in this space.

        Without knowing more of your use case, though, multi-cluster really adds a lot of complexity here, and I’m not sure what you’re getting over, say, namespaces and network policies.
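
        For comparison, the single-cluster version of that isolation is just a NetworkPolicy; a sketch with invented namespace and label names:

        ```python
        # Sketch: in one cluster, isolate a "backend" namespace so only
        # pods from the "web" namespace may reach it. Namespace names are
        # invented; requires a CNI that enforces NetworkPolicy.
        from kubernetes import client, config

        config.load_kube_config()

        policy = {
            "apiVersion": "networking.k8s.io/v1",
            "kind": "NetworkPolicy",
            "metadata": {"name": "backend-ingress", "namespace": "backend"},
            "spec": {
                "podSelector": {},  # applies to every pod in the namespace
                "policyTypes": ["Ingress"],
                "ingress": [{
                    "from": [{"namespaceSelector": {
                        "matchLabels": {"kubernetes.io/metadata.name": "web"},
                    }}],
                }],
            },
        }
        client.NetworkingV1Api().create_namespaced_network_policy("backend", policy)
        ```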

  • s_busso@alien.topB · 11 months ago

    The cheapest and most straightforward solution would be to use Docker Swarm. You will benefit from the cluster without the cost or complexity of Kubernetes. Much less to learn, reuse the docker compose. Here is my go-to stack for similar needs: Swarm, Portainer, Caddy.