Run AI locally

Run an AI model locally, offline, without internet access.
If you don't trust ChatGPT or Google Gemini with your data, the only way around that is to run a model locally on your own system.

Go to https://ollama.com/

Install Ollama on your system:

curl -fsSL https://ollama.com/install.sh | sh
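
Once that finishes, you can sanity-check the install before going further (just a quick verification step, not part of the official instructions):

ollama --version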

And then choose your own model

https://ollama.com/library
ollama run gemma

or

ollama run llama2

And so on
Manual installation method

https://github.com/ollama/ollama/blob/main/docs/linux.md
I recommend Gemma, Llama, or Mixtral.
Choose the 2b, 7b, 13b, 34b, or 70b variant according to your hardware and RAM.
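
Model sizes in the Ollama library are usually picked with a tag after the model name (the available tags differ per model, so check its library page first), for example:

ollama run gemma:2b
ollama run llama2:13b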


Also, Google made “Gemma” open source.

The real question, though, is how to train LLMs on the datasets you need and how to remove censoring.

Purpose-trained datasets are needed to drastically reduce complexity and resource usage: for example, models for writing code in a single programming language plus one spoken language.

Censoring is annoying because it barely works. Machine learning doesn't really allow censoring after the fact (you need to censor the datasets that go in instead), and censoring is bad for performance (AFAIK) and for the quality of the results.
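
Ollama can't train or fine-tune models itself, but for what it's worth you can at least layer a custom system prompt and sampling parameters on top of an existing model with a Modelfile. A minimal sketch (model name and prompt are just placeholders):

# Modelfile
FROM llama2
PARAMETER temperature 0.7
SYSTEM You are a coding assistant. Answer only with Python code plus short English explanations.

ollama create my-coder -f Modelfile
ollama run my-coder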

curl -fsSL https://ollama.com/install.sh | sh

How to install on Silverblue? That script does not set up the ollama user and group properly, nor can it write to the /usr/share/ollama folder, which doesn't get created. I tried rpm-ostree install ollama-linux-amd64, but get an error.

error: Packages not found: ./ollama-linux-amd64

Any help would be greatly appreciated.

I think on Silverblue the easiest way is to use Distrobox: for example, create a Fedora Workstation container, install Ollama there, and export it to the Silverblue desktop.
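
Roughly like this (untested sketch; the container name and image are only examples):

distrobox create --name ollama-box --image registry.fedoraproject.org/fedora:40
distrobox enter ollama-box
# inside the container:
curl -fsSL https://ollama.com/install.sh | sh
distrobox-export --bin "$(command -v ollama)" --export-path ~/.local/bin

Note that Ollama itself is a CLI/daemon rather than a desktop app, so exporting the binary is usually what you want; a graphical front end installed in the container could be exported with distrobox-export --app instead.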


Fedora Silverblue already has Podman and Toolbox, and I can't see myself installing another container tool. I'm researching how to get it running under Podman right now; I already have Open WebUI running in Podman.
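
For anyone trying the same thing, this is roughly how I understand Open WebUI is started under Podman (the ports, volume name, and OLLAMA_BASE_URL value are assumptions you may need to adjust):

podman run -d --name open-webui \
  -p 3000:8080 \
  -v open-webui:/app/backend/data \
  -e OLLAMA_BASE_URL=http://host.containers.internal:11434 \
  ghcr.io/open-webui/open-webui:main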

Toolbox doesn't offer an app-export feature, but yes, it is already there. I personally like Distrobox (with Podman) much more.


I do run apps in toolboxes like neovim using toolbox run --container fedora-develop-39 nvim without entering the toolbox.


The problem here: when I installed Ollama in the toolbox it ran, but it couldn't find my GPU.

I really don't know whether Toolbox has a passthrough option; I do know Distrobox does, so it will share the host GPU drivers (a.k.a. passthrough).

That support was released in June 2023 with version 1.5:

A new version, 1.5, has been released with the initial support for NVIDIA GPU containers, allowing Distrobox to share the host’s drivers with the container environment. This feature has been successfully tested on Ubuntu 22.04 and newer, Arch Linux, Fedora, RHEL/CentOS, and other major Linux distributions.
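
If I read the release notes right, that support is exposed via the --nvidia flag when creating the container, e.g.:

distrobox create --name ollama-gpu --image registry.fedoraproject.org/fedora:40 --nvidia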

Does Distrobox allow this without using rootful mode? That would be awesome.

And another point for Distrobox. Really, please just use it, it's great.

Install the Alpaca software. It is an Ollama derivative. With Alpaca you can download dozens of AI models suitable for your machine and turn it into a local AI assistant.

Flathub URL to install Alpaca
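
Assuming the Flathub app ID is com.jeffser.Alpaca, installation is just:

flatpak install flathub com.jeffser.Alpaca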


With some manual tinkering you can get it to work.

  • download the install.sh file
  • edit install.sh and replace /usr/share with /usr/local/share
  • add the following to the [Service] section of the ollama.service file that will be created:
PermissionsStartOnly=true
PrivateMounts=true
ExecStartPre=mount --bind /usr/local/share /usr/share
  • run the installer
  • create the directory /usr/local/share/ollama and let it be owned by ollama
mkdir -p /usr/local/share/ollama
chown ollama:ollama /usr/local/share/ollama
  • reload and restart ollama
systemctl daemon-reload && systemctl restart ollama

Now the ollama service should be running and not complaining about inaccessible directories.

(I went this route because of trouble getting CUDA to work in a container.)

Instead of those changes, you could also try installing Ollama in your home directory and creating a systemd user service, roughly like the sketch below.
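
Something along these lines should work (untested sketch; it assumes the ollama binary lives in ~/.local/bin and stores its models under ~/.ollama):

# ~/.config/systemd/user/ollama.service
[Unit]
Description=Ollama (user service)

[Service]
ExecStart=%h/.local/bin/ollama serve
Environment=OLLAMA_MODELS=%h/.ollama/models
Restart=on-failure

[Install]
WantedBy=default.target

Then enable it without root:

systemctl --user daemon-reload
systemctl --user enable --now ollama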

I'm beyond against any software that has a curl | sh one-liner like that as its official install instruction.
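
At the very least you can download the script, read it, and only then run it:

curl -fsSL https://ollama.com/install.sh -o install.sh
less install.sh   # review what it actually does
sh install.sh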


I don't know a lot about self-hosting AI, but NVIDIA's Jetson Orin Nano sounds like something powerful I could buy and drop into my network. Not sure what model I'd pick, but it looks like there are options :smiley:


Oh well… :sweat_smile:

I’m fairly confident in the walled gardens of Fedora and Mozilla, so I can suggest Mozilla AI Guide - Running LLMs Locally

Anyone running DeepSeek R1 locally?

Yeah, just use Alpaca as suggested above and download the correct model.
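
If you'd rather use plain Ollama, the distilled R1 models are in the library under deepseek-r1; pick a size your RAM/VRAM can handle, e.g.:

ollama run deepseek-r1:14b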


Alpaca doesn't allow all Ollama models, AFAIK, because it has no simple terminal run-command support?

Otherwise it works great

I find better results with Qwen 14b and 32b,
but I mostly use 14b as it is easier to run.
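
Assuming those are the Qwen 2.5 builds from the Ollama library, that would be something like:

ollama run qwen2.5:14b
ollama run qwen2.5:32b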