Just look at their install scripts. If they install to a local directory such as ~/.local/bin, they should work fine.
To my knowledge you can also run them in a Podman container, even with the NVIDIA drivers. Afaik these need to be the userspace drivers, but this approach would leave your host system clean.
If you want to run Ollama, Podman is a great option. It is pretty easy to get started:
$ podman run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
$ podman exec -it ollama ollama run phi3:mini
>>> Hello!
Hi there! How can I help you today?
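Since the container publishes port 11434, you can also talk to the server over Ollama's HTTP API from the host instead of using `podman exec`. A minimal sketch using the documented /api/generate endpoint:

```shell
# JSON body for Ollama's /api/generate endpoint;
# "stream": false asks for one complete JSON reply instead of chunks.
body='{"model": "phi3:mini", "prompt": "Hello!", "stream": false}'

# Query the server published on port 11434 and print just the reply text.
curl -s http://localhost:11434/api/generate -d "$body" \
  | python3 -c 'import json, sys; print(json.load(sys.stdin)["response"])'
```

This is also how frontends like Alpaca talk to Ollama under the hood.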
Now this will by default run on the CPU, but it should be possible to get it working on your NVIDIA GPU. The Docker Hub README seems to have some documentation. Just keep in mind that because atomic-desktop uses Podman, the instructions might be a bit different; this page might be helpful. I can't help you much with the GPU though, because I don't own an NVIDIA GPU to test things on.
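I can't verify this without an NVIDIA GPU, but per the NVIDIA Container Toolkit documentation the rough steps for Podman use CDI (Container Device Interface) and look something like this, assuming the nvidia-container-toolkit package is installed:

```shell
# Generate a CDI spec describing the installed NVIDIA driver/devices.
sudo nvidia-ctk cdi generate --output=/etc/cdi/nvidia.yaml

# Check which device names CDI now knows about (e.g. nvidia.com/gpu=0).
nvidia-ctk cdi list

# Re-run the container with the GPU passed through;
# label=disable avoids SELinux blocking access to the device nodes.
podman run -d --device nvidia.com/gpu=all --security-opt=label=disable \
  -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
```

Treat this as a sketch to adapt, not a tested recipe.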
Another option might be Alpaca, a GUI frontend for Ollama that has a Flatpak. I used it some time ago, but it seems like it's still in early development.
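If you want to try it, installing from Flathub is a one-liner (assuming the app id is still com.jeffser.Alpaca; check flathub.org to be sure):

```shell
# Install and launch Alpaca from Flathub (app id is an assumption).
flatpak install flathub com.jeffser.Alpaca
flatpak run com.jeffser.Alpaca
```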
It depends on your CPU and the model you are running; the smaller ones work fine on a fast CPU. But yes, a GPU would be better.
Podman should be able to use the GPU, and the Ollama container seems to support it, so it should work. I don't know exactly how to get it working though, because I don't have an NVIDIA GPU.