Welcome to Ollama Drama, a hands-on workshop where you'll learn to run, prompt, and customize LLMs locally using Ollama.
By the end of this workshop, you will:
- Run and interact with language models locally using the `ollama` CLI
- Use Python (`chatbot.py`) to call Ollama's API (a minimal sketch follows this list)
- Create a custom model using a `Modelfile`
- Pass tests using `pytest`
- Solve programming challenges and submit them via Pull Request
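As a taste of what `chatbot.py` does, here is a minimal sketch of calling Ollama's local HTTP API from Python. It assumes the Ollama service is running on its default port (11434) and that `qwen3:0.6b` has been pulled; it uses only the `requests` library, so adapt it to whatever `chatbot.py` actually ships with.

```python
import requests

# Minimal sketch: one-shot completion against Ollama's local HTTP API.
# Assumes the service is running on the default port 11434 and that
# qwen3:0.6b has already been pulled with `ollama pull qwen3:0.6b`.
response = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "qwen3:0.6b",
        "prompt": "What is the capital of Peru?",
        "stream": False,  # ask for a single JSON object instead of a stream
    },
    timeout=120,
)
response.raise_for_status()
print(response.json()["response"])
```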
Make sure you have the following installed before the workshop:
- Python 3.10+
- Ollama (with at least one model pulled locally, like `qwen3:0.6b`)
- Git
- A GitHub account
1. Fork this repository to your own GitHub account.

2. Clone your fork to your local machine:

   ```bash
   git clone https://github.com/YOUR_USERNAME/ollama-drama.git
   cd ollama-drama
   ```

3. Install dependencies:

   ```bash
   python3 -m venv .venv
   source .venv/bin/activate
   pip install -r requirements.txt
   ```

4. Execute the preflight checks:

   ```bash
   bash ./preflight.sh
   ```
Each problem lives in a subfolder like:

```text
problems/question1/
├── prompt.md            # Problem description
├── expected_output.txt  # Hints or expected structure
└── student_code.py      # This is what you edit
```
To test your solution locally:

```bash
pytest -k test_problem1
```
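To see how `pytest -k` selects a test by name, here is a hypothetical pairing of a solution and its test. The function `add_numbers` and the test body are illustrative only; the real signatures and checks are defined by each problem's `prompt.md` and the repo's test suite.

```python
# Hypothetical example only: the real contents of
# problems/question1/student_code.py are defined by prompt.md.
def add_numbers(a: int, b: int) -> int:
    """Return the sum of two integers."""
    return a + b

# A matching test; `pytest -k test_problem1` selects it by name.
def test_problem1():
    assert add_numbers(2, 3) == 5
```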
1. Pull a model:

   ```bash
   ollama pull qwen3:0.6b
   ```

2. Run a prompt:

   ```bash
   ollama run qwen3:0.6b "What is the capital of Peru?"
   ```

3. Summarize a file:

   ```bash
   ollama run qwen3:0.6b "Summarize this file: $(cat README.md)"
   ```
If you get stuck:
- Check that your local `ollama` service is running at http://localhost:11434 (a quick Python check follows this list)
- Ask your workshop host or teammates!
- Or open a GitHub Issue if it's repo-related.
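One quick way to check the service from Python is Ollama's `GET /api/tags` endpoint, which lists locally pulled models; this sketch assumes the default port:

```python
import requests

# Minimal health check: ask the local Ollama service which models it has.
# Assumes the default port 11434.
try:
    r = requests.get("http://localhost:11434/api/tags", timeout=5)
    r.raise_for_status()
    models = [m["name"] for m in r.json()["models"]]
    print("Ollama is up. Local models:", models)
except requests.ConnectionError:
    print("Ollama does not appear to be running; try starting it with `ollama serve`.")
```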
We'll add extra challenges if you complete the first one early, including:
- Publishing your custom model to Ollama.com
Enjoy the chaos. Herd your llamas.
– The Ollama Drama Team