Hi all - I recently moved my old Docker setup across to rootless Podman containers, but I am having some trouble getting my Plex container to use the CPU's built-in hardware transcoding.

The “/dev/dri” device is being passed into the container, and after some reading I also added “--group-add=keep-groups” to my configuration.

Still no luck getting the “video” group applied to the plex user inside the container so that it can access the device.
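
For reference, this is roughly how the container is being started (the image name and host paths are placeholders rather than my exact setup):

    podman run -d --name plex \
      --device /dev/dri:/dev/dri \
      --group-add=keep-groups \
      -v /path/to/plex/config:/config \
      docker.io/plexinc/pms-docker:latest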

Anyone successfully running rootless Plex with H/W transcode?

  • aberrate_junior_beatnik@midwest.social · 1 year ago (edited)

    You might need to also add --gidmap=n:{video_gid}:1, otherwise the host video group won’t have a matching group in the container’s user namespace. n can be any number you pick, so long as it doesn’t clash with an existing gid in the container. Unsure if --group-add=keep-groups does this already. You can check /proc/self/gid_map to see what is already being mapped.

    Of course the container user will need group n (from the gidmap flag above) either as primary or in the supplementary groups.

    [edit: I wrote this at 3am on my phone, and I misunderstood how the --gidmap flag works. This code won’t work, but I think the diagnosis is correct: there’s no mapping from the host’s video group to the container’s user namespace, see my other comment in reply to OP]

  • aberrate_junior_beatnik@midwest.social · 1 year ago

    Ok, so I did some testing locally.

    Assuming podman is running as the user user, whose primary group is gid 1000, the default /etc/subgid will look like:

    ...
    user:100000:65536
    ...
    

    Running podman run --rm -it busybox cat /proc/self/gid_map with this /etc/subgid, the output is going to look like:

    0       1000          1
    1     100000      65536 
    

    Which means that if you bind-mount a volume in and create a file with gid 0, it will have gid 1000 on the host. Subsequent gids will map to 100000, 100001, etc. The group video, which should be gid 44 in the container, will map to 100043 on the host. So one option you have would be to add 100043 (or whatever gid 44 gets mapped to, if your /etc/subgid is different) to the ACL of /dev/dri/cardX. Then if your plex user has group video in the container, you should be golden. No need to even have --group-add=keep-groups. Even the user running podman wouldn't need to be in group video on the host.
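
    Something like this on the host, for example (the device node and the mapped gid 100043 come from the default mapping above; adjust both for your system):

    # grant the mapped host gid read/write on the device node
    sudo setfacl -m g:100043:rw /dev/dri/card0
    # confirm the new ACL entry
    getfacl /dev/dri/card0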

    This is probably what you should do, because the following will change the behavior of every other container the user runs. But since I spent time hunting it down, I’m going to post the rest of this anyway:

    As it stands, your container will only have access to the mapped gids above. If you want to get group 44 in your container to map to group 44 on the host, some trickery will be necessary. First, you will have to change /etc/subgid from the above to:

    ...
    user:44:1
    user:100000:65535
    ...
    

    Changing the count on the second line (65536 -> 65535) isn't strictly necessary, but in my testing podman did not like having unequal numbers of subordinate gids and uids. Also, the order doesn't matter; you could switch the order of the lines, but podman is always going to interpret them in order of the host gids. Finally, I had to reboot for the change here to register. There's probably a way to do it otherwise, but logging out & back in didn't do it, and neither did systemctl daemon-reload, so ¯\_(ツ)_/¯
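
    (One reboot-free option I didn't try: podman's troubleshooting docs say to run podman system migrate after editing /etc/subuid or /etc/subgid so the new mappings get picked up.)

    # stops the user's running containers and re-applies the subordinate id mappings
    podman system migrate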

    After this change, podman run --rm -it busybox cat /proc/self/gid_map will output something like:

    0       1000          1
    1         44          1
    2     100000      65535
    

    Ok, so now we have access to the host’s group 44, but it’s mapped to the container’s group 1. That’s not very useful, since gid 1 is traditionally the daemon group, which is used for other stuff. So here’s where --gidmap enters the picture and it gets confusing. With rootless podman, --gidmap does not take the format {container-id}:{host-id}:{count}. It takes the format {final-id}:{initial-id}:{count}. So if we want to map to the host’s id 44, we actually need to map to id 1, which then gets mapped to 44. So here’s the command:

    podman run --gidmap 0:0:1 --gidmap 44:1:1 --gidmap 1:2:43 --gidmap 45:45:65492 --rm -it busybox cat /proc/self/gid_map

    Which then should output:

     0          0          1
    44          1          1
     1          2         43
    45         45      65492
    

    Makes perfect sense, right? Well, it does make sense, but it's (at least to me) confusing. What podman does is create a second, nested user namespace, and uses that to make the map. So nested 0 maps to initial 0, which maps to host 1000. Nested 44 maps to initial 1, which maps to host 44. Nested 1 maps to initial 2, which maps to host 100000. Nested 45 maps to initial 45, which maps to host 100043. And so on. So now you should be able to do

    # on the host: create a stand-in for the device node, group video (gid 44), mode 660
    rm -f fake-dri; touch fake-dri; chgrp video fake-dri; chmod 660 fake-dri
    # in the container: bind-mount the file over /dev/dri and try writing to it
    podman run --gidmap 0:0:1 --gidmap 44:1:1 --gidmap 1:2:43 --gidmap 45:45:65492 --mount type=bind,src=$(pwd)/fake-dri,dst=/dev/dri --rm -it busybox sh -c 'echo I can write to /dev/dri > /dev/dri'
    # back on the host: confirm the write landed
    cat fake-dri
    

    And it should output I can write to /dev/dri. I didn’t actually test it with plex, but if the issue really is permissions on /dev/dri/cardX, this should work.
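
    Applied to an actual Plex container it would look something like this (again, untested with Plex itself; the image name is only an example, and you'd keep your existing volumes and ports):

    podman run -d --name plex \
      --gidmap 0:0:1 --gidmap 44:1:1 --gidmap 1:2:43 --gidmap 45:45:65492 \
      --device /dev/dri:/dev/dri \
      docker.io/plexinc/pms-docker:latest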

      • SpaceNoodle@lemmy.world · 1 year ago

        So you’re not even going to consider trying it without a container? What’s with the container madness?

        • falcon15500@lemmy.nine-hells.net (OP) · 1 year ago

          No madness. I know it works natively. I also know it works perfectly well in a rootful container.

          All of my other applications are running in containers, and having Plex also run in a container would simplify my overall architecture and recovery, should I ever need to replace the host.