Ollama on atomic distro?

How would you run Ollama on an atomic distro? In a podman container, in a toolbox container, or perhaps as a layered installation?

I have managed to install the NVIDIA drivers, but I'm unsure what to do next. I suppose I should try the NVIDIA Container Toolkit, but if I want to use a podman container, would I need to layer it? If I run in toolbox, will I have access to the GPU the same way as if I ran on the host?
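For reference, the podman route I had in mind looks roughly like this (just a sketch, assuming the NVIDIA Container Toolkit is layered with rpm-ostree and the upstream `ollama/ollama` image; paths and tags may differ on your system):

```shell
# Layer the toolkit on the host (needs a reboot on atomic distros)
rpm-ostree install nvidia-container-toolkit

# Generate a CDI spec so podman can expose the GPU to containers
sudo nvidia-ctk cdi generate --output=/etc/cdi/nvidia.yaml

# Run Ollama with GPU access and a persistent volume for models
podman run -d --name ollama \
  --device nvidia.com/gpu=all \
  --security-opt label=disable \
  -v ollama:/root/.ollama \
  -p 11434:11434 \
  docker.io/ollama/ollama:latest
```

After that you could talk to it with e.g. `podman exec -it ollama ollama run llama3.2`, or point any client at port 11434.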

I think it depends on your use case. Although Ollama is available in the Fedora repository, it is quite outdated, so layering it can be a serious limitation if you plan to run newer models.


The best way is to use toolbox; you get the GPU and everything working.
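In case it helps, the toolbox route spelled out might look like this (a rough sketch, assuming the official install script; the container name is just an example):

```shell
# Create and enter a toolbox; it shares the host's devices,
# including the GPU, so no container toolkit is needed
toolbox create ollama-box
toolbox enter ollama-box

# Inside the toolbox: install Ollama with the official script
curl -fsSL https://ollama.com/install.sh | sh

# Start the server in the background, then pull and chat with a model
ollama serve &
ollama run llama3.2
```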

Isn't there a performance overhead from running in a toolbox container? Would I still need the NVIDIA Container Toolkit to get the best performance? And can I run the container in the background when the computer starts, so I have Ollama as a background service?

No need for the toolkit, it is already there. The overhead is really small, so you basically don't need it, and you can create a script etc. to start the container on boot.
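One way to sketch the "start on boot" part without a hand-rolled script is a podman quadlet (assuming podman 4.4+, the CDI device name below, and that you're happy with the upstream image; adjust the tag and port to taste). Save it as `~/.config/containers/systemd/ollama.container`:

```ini
[Unit]
Description=Ollama server

[Container]
Image=docker.io/ollama/ollama:latest
ContainerName=ollama
AddDevice=nvidia.com/gpu=all
SecurityLabelDisable=true
Volume=ollama:/root/.ollama
PublishPort=11434:11434

[Service]
Restart=always

[Install]
WantedBy=default.target
```

Then `systemctl --user daemon-reload` and `systemctl --user start ollama`. If you want it to start at boot without logging in, enable lingering for your user with `loginctl enable-linger`.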