@peterwwillis
Last active February 5, 2026 11:54
Installing Ollama and OpenWebUI on Ubuntu 24.04

Notes

  • first, add your user to the following Unix groups: video, render, docker, ollama

    • sudo usermod -a -G video,render,docker,ollama $LOGNAME
  • make sure the ollama user is also in groups video and render (e.g. sudo usermod -a -G video,render ollama)

  • install Docker (ideally in a dedicated VM; see https://gist.github.com/peterwwillis/e2b37e5dd502fd7ffc3833f56feade1e)

  • this guide installs the latest AMD drivers, but that may be a newer version of ROCm than ollama supports (their guide says to use the AMD ROCm v6 tools). On my Ubuntu 24.04 system, the latest AMD ROCm tools don't install completely due to package conflicts, though some of them do.

  • check journalctl -u ollama.service | cat for warnings about not finding the GPU

    • if it says it can't find the GPU, or that it's using only the CPU for inference, and you have an AMD GPU (or at least no Nvidia GPU) and want to try Vulkan, do this:
    • sudo systemctl edit ollama.service
    • Add the following:
      [Service]
      Environment=OLLAMA_VULKAN=1
      Environment=CUDA_VISIBLE_DEVICES=
      Environment=GGML_CUDA_INIT=0
      Environment=ROCR_VISIBLE_DEVICES=
      Environment=HIP_VISIBLE_DEVICES=
      Environment=HSA_OVERRIDE_GFX_VERSION=
      
    • sudo systemctl restart ollama
    • check logs again
  • if you use Vulkan support for ollama (which is experimental):

    • check vulkaninfo --summary for GPU devices
    • if necessary, sudo systemctl edit ollama.service and add an Environment=GGML_VK_VISIBLE_DEVICES=0 line under the [Service] section, with 0 here denoting a specific GPU device
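The prerequisite checks in the notes above can be sketched as a small preflight script. This is a sketch, not part of the gist: the group names are the ones listed above, and the grep pattern for the service log is a guess at relevant keywords, not an official ollama log format.

```shell
# Preflight sketch: check group membership and scan the ollama log for GPU hints.
me="$(id -un)"
for grp in video render docker ollama; do
    if id -nG "$me" | grep -qw "$grp"; then
        echo "OK: $me is in group $grp"
    else
        echo "MISSING: $me is not in group $grp (sudo usermod -a -G $grp $me)"
    fi
done
# If systemd/journalctl is available, show recent GPU-related ollama log lines.
if command -v journalctl >/dev/null 2>&1; then
    journalctl -u ollama.service --no-pager 2>/dev/null \
        | grep -iE 'gpu|vulkan|rocm|cuda' | tail -n 20
fi
```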

Installation

Run the following commands:


# AMD GPU relevant stuff
if lsmod | grep amdgpu ; then
    echo "Driver amdgpu loaded; continuing"
elif lspci | grep -i radeon ; then
    echo "Press enter to reinstall the amdgpu drivers, or Ctrl+C to cancel..."
    read INPUT
    sudo apt remove amdgpu-install
    sudo apt autoremove amdgpu-dkms
    sudo rm /etc/apt/sources.list.d/amdgpu.list
    sudo rm -rf /var/cache/apt/*
    sudo apt clean
    sudo apt update
    echo "Installing AMDGPU drivers..."
    echo ""
    curl -fsSL -o amdgpu-install_7.2.70200-1_all.deb https://repo.radeon.com/amdgpu-install/7.2/ubuntu/noble/amdgpu-install_7.2.70200-1_all.deb
    if ! echo "9b9127cfbcffd20c6e1a8a080c3bb2977db22b7bbf82d7c406056c2a507cb17e  amdgpu-install_7.2.70200-1_all.deb" | sha256sum -c - ; then
        echo "PACKAGE CHANGED! EXITING"
        sleep 5
        exit 1
    fi
    sudo apt install ./amdgpu-install_7.2.70200-1_all.deb
    sudo apt update
    sudo apt install python3-setuptools python3-wheel
    sudo apt install rocminfo rocm-smi rocm-hip-runtime
    sudo apt install "linux-headers-$(uname -r)" "linux-modules-extra-$(uname -r)"
    sudo apt install amdgpu-dkms
fi


# NVIDIA GPU relevant stuff
if lspci | grep -i nvidia ; then
    sudo apt install -y nvidia-driver-580
    sudo modprobe nvidia
    if ! lsmod | grep -E '^nvidia|^nouveau' ; then
        sudo dmesg -T | tail -n 30
        echo ""
        echo "WARNING: No Nvidia driver loaded!"
        sleep 5
    fi
fi

# Install Ollama AMD ROCM binaries
curl -fsSL -o ollama-linux-amd64-rocm.tar.zst https://ollama.com/download/ollama-linux-amd64-rocm.tar.zst
if ! echo "4217de438668a062993d1757e5db110d58b5f4c493c6e6cf2f4d7bb67723929d  ollama-linux-amd64-rocm.tar.zst" | sha256sum -c - ; then
    echo "INSTALL TARBALL HAS CHANGED! EXITING"
    sleep 5
    exit 1
fi
sudo tar -C /usr/local -xvf ollama-linux-amd64-rocm.tar.zst

# Install Ollama itself
curl -fsSL -o ollama-install.sh https://ollama.com/install.sh
if ! echo "46d68406aaa672e974862423059d32cfb0624b167de843b0d136504d2208aa2f  ollama-install.sh" | sha256sum -c - ; then
    echo "INSTALL SCRIPT HAS CHANGED! EXITING"
    sleep 5
    exit 1
fi
sh ollama-install.sh
sudo systemctl start ollama

sleep 10
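Optionally, instead of relying on the fixed sleep above, you can poll the server until it answers. This is a sketch, not part of the original gist; /api/version is Ollama's standard version endpoint on its default port 11434.

```shell
# Sketch: wait for the Ollama API to come up instead of sleeping blindly.
wait_for_ollama() {                      # usage: wait_for_ollama [tries]
    tries="${1:-30}"
    while [ "$tries" -gt 0 ]; do
        if curl -fsS http://localhost:11434/api/version >/dev/null 2>&1; then
            echo "ollama is responding"
            return 0
        fi
        tries=$((tries - 1))
        sleep 1
    done
    echo "ollama did not come up" >&2
    return 1
}
```

Call it with e.g. `wait_for_ollama 30` in place of the sleep.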

# Possibly needed for AMD GPUs (HSA_OVERRIDE_GFX_VERSION is an AMD ROCm variable)
#if journalctl -u ollama | grep 'msg="failure during GPU discovery"' ; then
#    mkdir -p /etc/systemd/system/ollama.service.d
#    cat > '/etc/systemd/system/ollama.service.d/override.conf' <<'EOTEXT'
#[Service]
#Environment="HSA_OVERRIDE_GFX_VERSION=11.0.2"
#EOTEXT
#    systemctl restart ollama
#fi


# Pull a small local model
ollama pull llama3.2

echo ""
echo "We will now run Open WebUI with Docker."
echo "When it's finished loading you will see an ASCII art logo."
echo "At that point, open browser to http://localhost:3000/ and create a new account."

docker run --rm -p 3000:8080 --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui ghcr.io/open-webui/open-webui:main
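The download steps above pin each artifact to a known SHA-256 checksum, so the script aborts if the upstream file changes. The same pattern can be exercised locally with a throwaway file (a demonstration only; the filename and contents here are made up):

```shell
# Demonstrate the checksum-pinning pattern used by the install steps above.
tmpfile="$(mktemp)"
printf 'example artifact\n' > "$tmpfile"
# Normally the expected hash is hardcoded; here we compute it for the demo.
expected="$(sha256sum "$tmpfile" | awk '{print $1}')"
# This is the same check the script performs against the Ollama tarball:
if echo "$expected  $tmpfile" | sha256sum -c - >/dev/null 2>&1; then
    echo "checksum ok"
else
    echo "CHECKSUM MISMATCH! EXITING"
fi
rm -f "$tmpfile"
```

Note that `sha256sum -c` expects two spaces between the hash and the filename.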

Links:
