Best way to run a local LLM (AI) on Silverblue?

Hey guys, I'm looking for a way to run a local LLM on my laptop with Silverblue. It has 16 GB RAM and a GTX 1650 (mobile, I think 6 GB VRAM), and I use nouveau.

Basically I want to run a local LLM just to interact with my notes (Markdown) that I use for studying.

Is there any easy way/recommendation to run one on Silverblue? Or is that even possible with my setup?

Thanks in advance!

Just look at their install scripts. If they install to a local directory like ~/.local/bin, then they work.
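
For example (a minimal sketch, assuming you've downloaded a standalone binary for some LLM tool; the tool name below is just a placeholder), you'd drop it into ~/.local/bin and make sure that directory is on your PATH:

$ mkdir -p ~/.local/bin
$ cp ./some-llm-tool ~/.local/bin/
$ chmod +x ~/.local/bin/some-llm-tool
# ~/.local/bin is usually already on PATH on Fedora; check with:
$ echo "$PATH" | tr ':' '\n' | grep -x "$HOME/.local/bin"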

To my knowledge you can also run them in a Podman container, even with NVIDIA drivers. These need to be userspace drivers afaik, but this would leave your host system clean.

If you want to run Ollama, Podman is a great option. It is pretty easy to get started:

$ podman run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
$ podman exec -it ollama ollama run phi3:mini
>>> Hello!
 Hi there! How can I help you today?

Now this will run on the CPU by default, but it should be possible to get it working on your NVIDIA GPU. The Docker Hub readme seems to have some documentation. Just keep in mind that because the atomic desktops use Podman, the instructions might be a bit different; this page might be helpful. I can't help you much with the GPU though, because I don't own an NVIDIA GPU to test things on.
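
For what it's worth, here is a rough sketch of the usual Podman + NVIDIA route, assuming you have layered the proprietary NVIDIA driver (nouveau has no CUDA support, so Ollama can't use the GPU with it) and the nvidia-container-toolkit package. I haven't tested this on Silverblue, so treat the exact flags as an assumption:

# generate a CDI spec so Podman can pass the GPU through (needs nvidia-container-toolkit)
$ sudo nvidia-ctk cdi generate --output=/etc/cdi/nvidia.yaml
# start the container with the GPU attached; --security-opt may be needed on SELinux systems
$ podman run -d --device nvidia.com/gpu=all --security-opt=label=disable \
    -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama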

Another option might be Alpaca, a GUI frontend for Ollama that has a Flatpak. I used it some time ago, but it seems like it's still in early development.
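
If you want to try it, it should be a normal Flatpak install; I believe the Flathub ID is com.jeffser.Alpaca, but double-check on Flathub:

$ flatpak install flathub com.jeffser.Alpaca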

I heard that CPU-only is not good for these models. So you think it's possible to run these models with Podman/Toolbox + NVIDIA?

Depends on your CPU and the model you are running; the smaller ones work fine on a fast CPU. But yes, a GPU would be better.
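
For example, inside the container from my earlier reply you could try one of the smaller models. The exact tags change over time, so check the Ollama library; llama3.2:1b is just an assumption on my part, pick whatever small model is current:

$ podman exec -it ollama ollama run llama3.2:1b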

Podman should be able to use the GPU, and the Ollama container seems to support it, so it should work. I don't know exactly how to get it working though, because I don't have an NVIDIA GPU.

Thanks, I tried the Alpaca Flatpak here, but it's so slow; maybe my system hardware just isn't capable enough.

Do you know if Podman would be faster? Because if you say it's going to be the same, I won't go to the effort of trying to set up NVIDIA on Podman.

Check out Simon Willison's CLI tool for interacting with LLMs. It has support for local models and should be fine with Silverblue.

https://llm.datasette.io/en/stable/

He writes about it here.
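
As a rough sketch of how it fits your notes use case (the plugin, model, and file path below are assumptions on my part, so check the docs): install the CLI with pipx or pip --user, add a plugin for local models such as llm-ollama, and pipe a note into it:

# install the CLI in your home directory (no layering needed on Silverblue)
$ pipx install llm
# add a plugin that talks to a locally running Ollama server
$ llm install llm-ollama
# list the models llm can see
$ llm models
# ask a question about one of your Markdown notes (path is just a placeholder)
$ cat ~/notes/some-note.md | llm -m phi3:mini "Summarize the key points of this note"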

Here’s one, in Podman too.