Thanks, the bootstrapping idea hadn’t been mentioned in the comments yet. And your blog looks promising, I’ll have a more thorough look soon.
Nice, thanks, again! I overlooked the dependency instructions in the container service file, which is why I wondered how the heck podman figures out the dependencies. It makes a lot of sense to do it like this, now that I think of it.
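In case anyone else overlooked them like I did: you can print the unit that quadlet actually generates and read those dependency lines yourself (the service name below is just a placeholder):
# show the full service unit generated from mycontainer.container,
# including the dependency directives quadlet adds
systemctl --user cat mycontainer.service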
Awesome, so, essentially, you create a name.pod file like so:
[Unit]
Description=Pod Description
[Pod]
# stuff like PublishPort or networking
and join every container into the pod through the following line in the .container files:
Pod=name.pod
and I presume this all gets started via
systemctl --user start name.service
and systemd/podman somehow figures out which containers need to be created and joined into the pod, or do they all have to be started individually?
(Either way, I find the documentation of this feature lacking. Once I’ve tested this stuff myself, I’ll look into improving it.)
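For reference, this is roughly how I picture the complete minimal setup — the file names, image and port are just placeholders, and I haven’t verified the generated service names myself:
# ~/.config/containers/systemd/mypod.pod
[Unit]
Description=Example pod

[Pod]
# stuff like PublishPort or networking
PublishPort=8080:80

# ~/.config/containers/systemd/web.container
[Unit]
Description=Web server joined into the pod

[Container]
Image=docker.io/library/nginx:latest
Pod=mypod.pod

[Install]
WantedBy=default.target

# reload the generator, then start the pod; if I read the man page right,
# the .pod unit becomes <name>-pod.service rather than plain <name>.service
systemctl --user daemon-reload
systemctl --user start mypod-pod.service
Whether starting the pod service pulls the containers in with it, or whether they have to be enabled/started individually, is exactly the part I’d still want to confirm.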
I’ve wondered about that myself and asked here: https://lemmy.world/post/20435712 – got some very reasonable answers.
Thank you, I think the “less heavy than managing a local micro-k8s cluster” part was a big portion of what I was missing here.
Understood, thanks, but if I may ask, just to be sure: It seems to me that without interacting with the kubernetes layer, I’m not getting pods, only standalone containers, correct? (Not that I’m afraid of writing kube configuration, as others have incorrectly inferred. At this point, I’m mostly curious what this configuration would look like, because I couldn’t find any examples.)
Thank you for those very convincing points. I think I’ll give it a try at some point. It seems to me that what you get in return for writing quadlet configuration on top of the kubernetes-style pod/container config is that you don’t need to maintain an independent kubernetes distro; podman and systemd take care of it and allow for system-native management. This makes a lot of sense.
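Just to make that concrete for myself (hypothetical paths and names, I haven’t run this yet): the quadlet side apparently boils down to a small .kube unit pointing at the kubernetes-style YAML one would otherwise feed to podman kube play:
# ~/.config/containers/systemd/myapp.kube
[Unit]
Description=Run the kube-style pod definition via podman

[Kube]
# ordinary Kubernetes Pod manifest; the pod/container layout lives here
Yaml=/home/user/pods/myapp.yaml

[Install]
WantedBy=default.target
So the pod layout stays in the familiar kube YAML, while systemd/podman handle the lifecycle.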
One caveat I was (more or less actively) ignoring is that when the server shuts down, it stops powering the mikrotik and so I cannot access the BMC to restart it etc… On a related note, I am afraid that a remote session to the BMC tunnelled through the mikrotik will not survive a reboot of the machine, which might prevent me from getting to the BIOS screen, should I have to reconfigure something remotely.
Thanks 😀 But you hardly get the same kind of control over what the CPU on your graphics card does as you get over the Linux machine that this router is, do you?
(Oh, and actually, my first and last discrete GPU was an ATI 9600 XT or something from over twenty years ago, so I guess that statement about my inexperience with them still stands 😉 Until somebody comes along to tell me that the same could be said about RAID controllers etc…)
Finally, I can give it a star, being only on GitLab and not on GitHub.