A lightweight and efficient UI for interacting with Ollama models locally. This application provides a simple yet powerful interface for chatting with AI models through Ollama.
- 📱 Real-time message streaming
- 🧠 View AI thinking process
- 💬 Conversation history
- 🚀 Multiple model support
- 🔗 Custom Ollama URL configuration
- 💾 Persistent storage with SQLite
- Ollama running locally or on a network-accessible machine
- Docker (for the container method)
- Go and Node.js (for the local build method)
By default, Ollama only listens on localhost (127.0.0.1), which makes it inaccessible from Docker containers. To allow connections from containers or other machines, you need to configure Ollama to listen on all interfaces:
```bash
OLLAMA_HOST=0.0.0.0:11434 ollama serve
```
Alternatively, add this line to your `.bashrc` or `.zshrc` so the setting persists across shells:

```bash
export OLLAMA_HOST=0.0.0.0:11434
```
This makes Ollama accessible from other machines and containers by binding to all network interfaces instead of just localhost.
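If you run Ollama as a systemd service on Linux, you can also set the variable in a service override so it survives reboots (a minimal sketch, assuming the service is named `ollama.service`):

```bash
# Open an override file for the Ollama service
sudo systemctl edit ollama.service

# In the editor that opens, add:
#   [Service]
#   Environment="OLLAMA_HOST=0.0.0.0:11434"

# Reload units and restart Ollama so the new environment takes effect
sudo systemctl daemon-reload
sudo systemctl restart ollama
```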
The easiest way to get started is to pull the pre-built image from the GitHub Container Registry. The `-v chat-data:/app/data` flag creates a named volume so your conversation history survives container restarts:

```bash
docker run -p 8080:8080 -v chat-data:/app/data ghcr.io/anishgowda21/tiny-ollama-chat:latest
```
Alternatively, you can build the Docker image locally:

```bash
docker build -t tiny-ollama-chat .
```

Then run the container with:

```bash
docker run -p 8080:8080 -v chat-data:/app/data tiny-ollama-chat
```
The Docker container supports configuration through environment variables:

- `PORT`: Server port (default: 8080)
- `OLLAMA_URL`: Ollama API URL (default: `http://host.docker.internal:11434`)
- `DB_PATH`: Database path (default: `/app/data/chat.db`)
Example with custom settings:

```bash
docker run -p 9000:9000 \
  -e PORT=9000 \
  -e OLLAMA_URL=http://host.docker.internal:11434 \
  -e DB_PATH=/app/data/custom.db \
  -v chat-data:/app/data \
  tiny-ollama-chat
```
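If you would rather keep the database in a folder on the host instead of a named Docker volume, a bind mount works too (a sketch; the `./data` directory is just an example path):

```bash
# Store chat.db in ./data on the host instead of a named volume
mkdir -p data
docker run -p 8080:8080 -v "$(pwd)/data:/app/data" tiny-ollama-chat
```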
Options for connecting the Docker container to Ollama:

- Use the Docker host's IP address:
  - On Linux: `-e OLLAMA_URL=http://172.17.0.1:11434` (Docker's default bridge gateway)
  - On macOS/Windows: `-e OLLAMA_URL=http://host.docker.internal:11434` (this is the default, so you don't need to pass it)
- Use the host network (a fuller example follows below):

  ```bash
  docker run --network=host tiny-ollama-chat
  ```
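With `--network=host` the container shares the host's network stack, so Ollama is reachable at localhost from inside the container and `-p` port mappings are unnecessary. A fuller invocation might look like this (a sketch; host networking behaves this way on Linux hosts):

```bash
# App listens directly on host port 8080; Ollama is reached via localhost
docker run --network=host \
  -e OLLAMA_URL=http://localhost:11434 \
  -v chat-data:/app/data \
  tiny-ollama-chat
```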
If you prefer to build and run the application directly:
The repository includes a build script that handles the entire build process:
```bash
# Make the script executable
chmod +x buildlocal.sh

# Run the build script
./buildlocal.sh
```
This script:
- Creates a build directory
- Builds the client with npm
- Builds the server with Go
- Places everything in the build directory
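If you prefer to run the steps yourself instead of using the script, the manual equivalent looks roughly like this (a sketch; the `client/` directory, output paths, and Go entry point are assumptions — check `buildlocal.sh` for the exact layout):

```bash
# Build the web client with npm (client/ directory is an assumption)
cd client
npm install
npm run build
cd ..

# Build the Go server binary into build/
mkdir -p build
go build -o build/tiny-ollama-chat .

# buildlocal.sh also places the built client assets where the server
# expects them; mirror that step according to the script.
```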
After building:

```bash
cd build
./tiny-ollama-chat
```
The server supports several command line flags:

- `-port=8080`: Set the port for the server to listen on (default: 8080)
- `-ollama-url=http://localhost:11434`: Set the URL for the Ollama API (default: `http://localhost:11434`)
- `-db-path=chat.db`: Set the path to the SQLite database file (default: `chat.db`)
Example with custom settings:

```bash
./tiny-ollama-chat -port=9000 -ollama-url=http://192.168.1.100:11434 -db-path=/path/to/database.db
```
If the application cannot connect to Ollama:

- Verify Ollama is running: `ps aux | grep ollama`
- Check that the Ollama URL is correct in your configuration
- Ensure network connectivity between the container and Ollama
- If using Docker, make sure you've configured Ollama to be accessible as described above (see the connectivity check below)
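As a quick connectivity check, you can query the Ollama API directly from wherever the app runs; `/api/tags` lists the locally available models (adjust the host to match your `OLLAMA_URL`):

```bash
# Should return a JSON list of installed models if Ollama is reachable
curl http://localhost:11434/api/tags
```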
- Open the application in your browser (http://localhost:8080 by default)
- Select a model from the sidebar to start a new conversation
- Type your message and press Enter or click the send button
- Browse previous conversations in the sidebar