Run AI models locally, offline, without internet.
If you don’t trust ChatGPT or Google Gemini with your data, the only option is to run a model locally on your own system.
Go to https://ollama.com and install Ollama on your system:
curl -fsSL https://ollama.com/install.sh | sh
Then choose your own model from the library:
https://ollama.com/library
ollama run gemma
or
ollama run llama2
and so on.
Manual installation method:
https://github.com/ollama/ollama/blob/main/docs/linux.md
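From memory, the manual method boils down to roughly this (check the linked doc for the current steps; it also covers the systemd service and the dedicated ollama user):
# download the standalone binary and make it executable
sudo curl -L https://ollama.com/download/ollama-linux-amd64 -o /usr/bin/ollama
sudo chmod +x /usr/bin/ollama
# then start the server
ollama serve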
I recommend Gemma, Llama, or Mixtral.
Choose 2b, 7b, 13b, 34b, or 70b as per your hardware and RAM.
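The size is picked with a tag on the model name, for example (exact tags vary per model, so check each model’s page in the library):
ollama run llama2:13b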
The real question, though, is how to train LLMs on the datasets you actually need, and how to remove censoring.
Training on narrow, task-specific datasets drastically reduces complexity and resource usage: for example, a model that only generates code in a single programming language plus one spoken language.
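You can’t retrain a model locally this way, but Ollama’s Modelfile at least lets you narrow an existing one down with a system prompt. A minimal sketch (the base model and the name pycoder are just examples):
cat > Modelfile <<'EOF'
FROM llama2
SYSTEM Answer only with Python code plus short English explanations.
EOF
ollama create pycoder -f Modelfile
ollama run pycoder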
Censoring is annoying because it barely works: machine learning doesn’t really allow censoring after the fact (you have to filter the datasets that go in instead), and censoring is bad for performance (as far as I know) and for the quality of the results.
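For what it’s worth, the Ollama library already lists community fine-tunes built without the alignment filtering; the name below is from memory, so verify it on the library page:
ollama run llama2-uncensored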
How do you install this on Silverblue? That script does not set up the ollama user and group properly, and it can’t write to the /usr/share/ollama folder, which doesn’t get created. I tried rpm-ostree install ollama-linux-amd64 but got an error.
I think on Silverblue the easiest way is to use Distrobox: create a Fedora Workstation container, for example, install Ollama there, and export the app to the Silverblue desktop.
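Something like this should work (untested sketch: the container name is arbitrary, and the binary path depends on where the installer drops it):
distrobox create --name ollama-box --image fedora:latest
distrobox enter ollama-box
curl -fsSL https://ollama.com/install.sh | sh
# make the CLI visible on the host
distrobox-export --bin /usr/local/bin/ollama --export-path ~/.local/bin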
Fedora Silverblue already has Podman and Toolbox, and I can’t see myself installing another container tool. I’m researching how to get it running under Podman right now; I already have Open WebUI running in Podman.
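There is an official ollama/ollama container image, so a CPU-only setup under Podman can be as small as this (sketch; GPU passthrough needs extra flags, e.g. CDI devices for NVIDIA):
podman run -d --name ollama -p 11434:11434 -v ollama:/root/.ollama docker.io/ollama/ollama
podman exec -it ollama ollama run gemma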
I really don’t know whether Toolbox has a passthrough option; I do know Distrobox has one, so it will share the host’s GPU drivers (i.e., passthrough).
That landed in June 2023 with version 1.5:
A new version, 1.5, has been released with initial support for NVIDIA GPU containers, allowing Distrobox to share the host’s drivers with the container environment. This feature has been successfully tested on Ubuntu 22.04 and newer, Arch Linux, Fedora, RHEL/CentOS, and other major Linux distributions.
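With 1.5 or newer it’s a single flag at container creation (sketch):
distrobox create --name ollama-box --nvidia --image fedora:latest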
Install the Alpaca software, a GUI built on top of Ollama. With Alpaca you can download dozens of AI models suitable for your machine and turn it into a local AI assistant.
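On Silverblue that is just a Flatpak (app ID from memory; verify it on Flathub):
flatpak install flathub com.jeffser.Alpaca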
I’m beyond against any software that has a curl-pipe-to-shell like that as its official install instruction.
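The usual mitigation is to download the script, read it, and only then run it:
curl -fsSL https://ollama.com/install.sh -o install.sh
less install.sh
sh install.sh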
I don’t know a lot about self-hosting AI, but NVIDIA’s Jetson Orin Nano sounds like something powerful I could buy and drop into my network. Not sure which model I’d pick, but it looks like there are options.