
Setting Up Ollama with Open WebUI in Docker: A Complete Guide

Cody Turk

Ollama is an open-source tool that makes it easy to run large language models locally on your machine. It handles the complexities of model serving and inference so you don't have to.

Open WebUI provides a clean, intuitive interface for interacting with your LLMs. Think of it as the friendly face that lets you chat with models, manage your collection, and get the most out of your local AI setup.

Why Run LLMs Locally?

Before we dive into the setup, you might be wondering: why bother running models locally when there are plenty of cloud options?

  • Privacy: Your conversations stay on your machine
  • No subscription fees: Once set up, it's completely free
  • Customization: Full control over which models you use
  • Learning opportunity: Great way to understand how LLMs work

Prerequisites

  • Docker and Docker Compose installed on your system (a quick version check is shown after this list)
  • Basic familiarity with terminal/command line
  • A computer with reasonable specs (8GB+ RAM recommended)
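
To confirm that Docker and Compose are available, you can run a quick version check (depending on how Docker was installed, Compose may be invoked as docker compose or docker-compose):

docker --version
docker compose version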

Step 1: Set Up Your Project Directory

First, create a dedicated directory for your Ollama setup:

mkdir ollama-webui
cd ollama-webui

Step 2: Create Your Docker Compose File

Create a new file named docker-compose.yml in your project directory and add the following configuration:

version: '3.8'   # newer Compose releases ignore this field; it is harmless to keep

services:
  ollama:
    image: ollama/ollama:latest
    container_name: ollama
    volumes:
      - ollama_data:/root/.ollama   # persists downloaded models between restarts
    ports:
      - "11434:11434"               # Ollama API
    restart: unless-stopped

  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    container_name: open-webui
    volumes:
      - open_webui_data:/app/backend/data   # persists users, chats, and settings
    ports:
      - "3000:8080"                 # web UI: host port 3000 -> container port 8080
    environment:
      - OLLAMA_BASE_URL=http://ollama:11434   # how Open WebUI reaches the Ollama container
    depends_on:
      - ollama
    restart: unless-stopped

volumes:
  ollama_data:
  open_webui_data:
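
Before launching anything, you can optionally have Compose parse and render the file, which catches indentation or syntax mistakes early:

docker-compose config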

Step 3: Launch Your Containers

With your Docker Compose file in place, starting the services is as simple as:

docker-compose up -d

The -d flag runs the containers in detached mode (in the background). The first time you run this command, Docker will download the necessary images, which might take a few minutes depending on your internet connection.
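
To confirm both containers came up cleanly, you can check their status and tail the Ollama logs:

docker-compose ps
docker-compose logs -f ollama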

Step 4: Access the Interface

Once your containers are up and running, open your web browser and navigate to:

http://localhost:3000

On your first visit, you'll be prompted to create an admin account. This account will be used to access the Open WebUI interface.
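
If the page does not load, it can help to verify that the Ollama API itself is reachable on the port mapped in the Compose file; a plain request to the root endpoint should respond with a short message confirming that Ollama is running:

curl http://localhost:11434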

Step 5: Download Your First Model

In the current version of Open WebUI, downloading models directly through the interface can be challenging. The most reliable method is to use the command line:

docker exec -it ollama ollama pull llama3

Replace llama3 with the name of the model you want to download. After downloading the model, refresh the Open WebUI interface to see your model appear in the model selector.
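
You can confirm the download finished and see every model stored locally with Ollama's list command:

docker exec -it ollama ollama list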

Step 6: Start Chatting

With your model downloaded:

  1. Go to the chat interface in Open WebUI
  2. Use the dropdown in the top-left corner to select your model
  3. Start chatting with your locally-running AI!
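
If you ever want to skip the browser, the same model can also be used directly from the terminal through Ollama's interactive prompt (type /bye to exit):

docker exec -it ollama ollama run llama3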

Setting up Ollama with Open WebUI using Docker gives Iowa businesses and individuals a practical way to explore powerful AI capabilities locally, with complete privacy, full control over your models, and no subscription costs. If you need assistance with this setup or other tech services, including professional videography, feel free to reach out.
