I can't run Ollama and llama-gpt in Podman

It is easy to install in a Docker container, but I can't get it working in Podman.

Can anyone help?

What have you tried?

What did you expect to happen?

What actually happened?


Everything goes smoothly until I run:

docker exec -it ollama ollama run llama2

It says no container found.
But if I run

docker run -it ollama

it runs but does not show any prompt to install the model. If you can manage to install it on your system, maybe you can help me figure this out.

docker container start ollama
docker container list -a
docker exec ...
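
Spelled out in full, the sequence might look like the sketch below. The volume path, port, and container name here are assumptions taken from the usual `ollama/ollama` image documentation; adjust them if your setup differs.

```shell
# Create the container once (volume and name follow the common
# ollama/ollama image docs; adjust if yours differ)
docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama

# Confirm the container actually exists and is running;
# "no container found" usually means this step was skipped
docker container list -a
docker container start ollama

# Only then exec into the running container
docker exec -it ollama ollama run llama2
```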

Done, thanks.


By the way, if anyone wants to run Llama locally, try this, at least until Fedora ships an AI of its own.

If I use Podman instead of Docker, it does not work.

This works for me on Fedora 40:

podman run -d -v ollama:/root/.ollama \
    -p 11434:11434 --name ollama ollama/ollama
podman container start ollama
podman exec -it ollama ollama run llama2
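
As a quick sanity check: the `-p 11434:11434` mapping above publishes the Ollama HTTP API on the host, and `/api/tags` is the endpoint that lists pulled models, so a simple curl can confirm the server is up before you exec into the container:

```shell
# Confirm the Ollama server inside the container is reachable from
# the host; 11434 is the port published in the run command above
curl http://localhost:11434/api/tags
```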

You are probably doing something wrong.


I deleted all the images and started fresh, and now it runs. I am surprised to see it works.

If you don’t have a lot of RAM, you can use a smaller model such as gemma:2b; just replace llama2 in the last command with it.
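
Concretely, that substitution would be the sketch below; gemma:2b is just the example tag, and any smaller model from the Ollama library works the same way:

```shell
# Same exec as before, with the smaller model tag in place of llama2;
# Ollama pulls the model automatically on first run
podman exec -it ollama ollama run gemma:2b
```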


I can’t install this: GitHub - getumbrel/llama-gpt: A self-hosted, offline, ChatGPT-like chatbot. Powered by Llama 2. 100% private, with no data leaving your device. New: Code Llama support!

I get an error saying -f is not a flag. How can I fix it?

This is how it should work:

sudo dnf -y install podman-docker podman-compose
sudo setenforce 0
git clone https://github.com/getumbrel/llama-gpt.git
cd llama-gpt
./run.sh --model 7b

However, the deployment takes quite a lot of time.

I will try and let you know.


This tool makes it easy if it helps:

I am using brew to simplify things.

brew install ollama
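
After installing, Ollama runs as a local background service. A minimal first session might look like this (the model name is just an example; starting the server with `ollama serve` in another terminal also works instead of `brew services`):

```shell
# Start the Ollama server in the background via Homebrew's
# service manager
brew services start ollama

# Pull a model on first use and open an interactive chat prompt
ollama run llama2
```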