Hello, I am looking to use the ollama container to run LLMs with podman. I have installed the NVIDIA Container Toolkit, but the container only seems to have access to the GPU if I add `--security-opt=label=disable` to the `podman run` command. Without it, I get the following error: `Failed to initialize NVML: Insufficient Permissions`.
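For reference, the kind of command I'm running looks roughly like this (the image, volume, port, and GPU device flag here are just illustrative of my setup):

```bash
# Works, but only because SELinux label separation is disabled for the container
podman run --rm --device nvidia.com/gpu=all --security-opt=label=disable \
    -v ollama:/root/.ollama -p 11434:11434 docker.io/ollama/ollama

# Dropping --security-opt=label=disable fails with:
#   Failed to initialize NVML: Insufficient Permissions
```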
How can I go about using my GPU with containers without disabling SELinux? Is there a policy I have to install, or an SELinux setting I need to change?
I'm not familiar with SELinux, so I appreciate any guidance. Thank you!
This means that there are SELinux rules to allow the access, but they are only effective if the `container_use_xserver_devices` boolean is enabled.
This can be done permanently with `semanage boolean -m --on container_use_xserver_devices`.
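A minimal sketch of checking and flipping it (`setsebool -P` is an equivalent alternative to `semanage`):

```bash
# See whether the boolean is currently on or off
sudo semanage boolean -l | grep container_use_xserver_devices

# Turn it on persistently (survives reboots and policy reloads)
sudo semanage boolean -m --on container_use_xserver_devices

# Equivalent alternative; -P makes the change permanent
sudo setsebool -P container_use_xserver_devices on
```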
FYI this boolean is documented in container_selinux(8):
> If you want to allow containers to use any xserver device volume mounted into container, mostly used for GPU acceleration, you must turn on the `container_use_xserver_devices` boolean. Disabled by default.
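With the boolean on, a GPU container should get device access without disabling labels; for example (image and device flag are just placeholders):

```bash
podman run --rm --device nvidia.com/gpu=all docker.io/ollama/ollama
```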