Run AI locally

Run an AI model locally, offline, without internet access.
If you don’t trust ChatGPT or Google Gemini with your data, the only way is to run a model locally on your own system.

Go to ollama.com and install Ollama on your system:

curl -fsSL https://ollama.com/install.sh | sh
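
On Linux the script also sets up a systemd service for Ollama. A quick sanity check after installing (assuming systemd, as on Fedora or Ubuntu):

ollama --version
systemctl status ollama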

Then choose your own model from the library:

https://ollama.com/library
ollama run gemma

or

ollama run llama2

And so on.

Manual installation method:

https://github.com/ollama/ollama/blob/main/docs/linux.md
I recommend Gemma, Llama, or Mixtral.
Choose the 2b, 7b, 13b, 34b, or 70b variant to match your hardware and RAM.
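
If it helps: the size variant is chosen with a tag after the model name. A minimal sketch (tag names assume what the library currently lists; the rough RAM figures come from the Ollama README):

# smallest Gemma variant, runs on modest hardware
ollama run gemma:2b
# a 7b model wants roughly 8 GB of RAM, a 13b roughly 16 GB
ollama run llama2:7b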


Also, Google made “Gemma” open source.

The real question, though, is how to train LLMs on the datasets you need and how to remove censoring.

Tailored training datasets are needed to drastically reduce complexity and resource usage, for example a model that only generates code in one programming language plus one spoken language.

Censoring is annoying because it barely works: machine learning just doesn’t really allow censoring after the fact (you need to censor the datasets that go in instead), and censoring hurts performance (AFAIK) and the quality of the results.

curl -fsSL https://ollama.com/install.sh | sh

How do I install this on Silverblue? That script does not set up the ollama user and group properly, nor can it write to the /usr/share/ollama folder, which doesn’t get created. I tried 'rpm-ostree install ollama-linux-amd64', but get an error.

error: Packages not found: ./ollama-linux-amd64

Any help would be greatly appreciated.

I think on Silverblue the easiest way is to use Distrobox: for example, create a Fedora Workstation container, install Ollama there, and export the app to the Silverblue desktop.
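
A minimal sketch of that workflow (container name, image tag, and install path are examples and may differ on your setup):

distrobox create --name fedora-box --image registry.fedoraproject.org/fedora-toolbox:39
distrobox enter fedora-box
# inside the container:
curl -fsSL https://ollama.com/install.sh | sh
# still inside the container, export the binary back to the host:
distrobox-export --bin /usr/local/bin/ollama --export-path ~/.local/bin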


Fedora Silverblue already has Podman and Toolbox, so I can’t see myself installing another container tool. I’m researching how to get it running under Podman right now. I have Open WebUI running in Podman.
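
For what it’s worth, Ollama’s documented Docker invocation should carry over to Podman mostly unchanged. A sketch of what I’m trying (CPU-only; the image name is fully qualified because Podman expects that):

podman run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama docker.io/ollama/ollama
# then run a model inside the container
podman exec -it ollama ollama run gemma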

Toolbox doesn’t offer the export-app feature, but yes, it is already there. I personally like Distrobox much more, paired with Podman.


I do run apps from toolboxes, like Neovim, using toolbox run --container fedora-develop-39 nvim, without entering the toolbox.


The problem here: when I installed Ollama in the toolbox it ran, but it couldn’t find my GPU.

I really don’t know whether Toolbox has a passthrough option; I do know Distrobox has one, so it shares the host’s GPU drivers (a.k.a. passthrough).

That was released in June 2023, with version 1.5:

A new version, 1.5, has been released with the initial support for NVIDIA GPU containers, allowing Distrobox to share the host’s drivers with the container environment. This feature has been successfully tested on Ubuntu 22.04 and newer, Arch Linux, Fedora, RHEL/CentOS, and other major Linux distributions.
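
Concretely, that’s the --nvidia flag at container creation (assuming the NVIDIA driver is already installed on the host; the name and image are just examples):

distrobox create --name ollama-gpu --image registry.fedoraproject.org/fedora-toolbox:39 --nvidia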

Does Distrobox allow this without using rootful mode? That would be awesome.

And another point for Distrobox. Really, please just use it, it’s great.