- Ubuntu 22.04
- Python 3.10
- Ollama 0.5.7
Install Python requirements:
$ pip install -r requirements.txt
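Optionally, create and activate a virtual environment first so the project's dependencies stay isolated (standard `venv` usage, not specific to this repo):
$ python -m venv .venv
$ source .venv/bin/activate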
Install and start the Ollama local LLM service:
# start ollama service:
$ sudo systemctl start ollama
# pull and run the "Llama 3.2 3B" model (opens an interactive chat; exit with /bye)
# for other Ollama models: https://ollama.com/library
$ ollama run llama3.2:3b
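`ollama run` also accepts a one-shot prompt, which makes a quick smoke test (the prompt text here is arbitrary):
$ ollama run llama3.2:3b "Reply with one short sentence."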
# preload the model into memory for faster first responses (a generate request with no prompt only loads the model):
$ curl http://localhost:11434/api/generate -d '{"model": "llama3.2:3b"}'
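The same endpoint can be called from Python for a quick end-to-end test. A minimal sketch, assuming the `requests` package is installed (the payload fields follow the Ollama REST API):

```python
# send one prompt to the local Ollama service and print the completion
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3.2:3b",  # same model tag as above
        "prompt": "Reply with one short sentence.",
        "stream": False,  # return one JSON object instead of a token stream
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["response"])  # the generated text
```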
# check the ollama service log:
$ journalctl -u ollama
If you do not have root permissions, install and start Ollama manually:
# Download and extract the package:
$ curl -L https://ollama.com/download/ollama-linux-amd64.tgz -o ollama-linux-amd64.tgz
$ tar -C <your-path>/ -xzf ollama-linux-amd64.tgz
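You can optionally verify the extracted binary before wiring it into a service (this should print the client version):
$ <your-path>/bin/ollama --version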
# Create a user-mode systemd service file `~/.config/systemd/user/ollama.service` with the following content:
[Unit]
Description=Ollama Service
After=network-online.target
[Service]
ExecStart=<your-path>/bin/ollama serve
Restart=always
RestartSec=3
Environment="PATH=$PATH"
[Install]
WantedBy=default.target
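# Reload the user daemon so systemd picks up the new unit file:
$ systemctl --user daemon-reload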
# Enable and start the ollama service in user mode:
$ systemctl --user enable ollama
$ systemctl --user start ollama
# Check the ollama service log:
$ journalctl --user -f -u ollama
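Once the service is running, a quick request to Ollama's version endpoint confirms it is reachable:
$ curl http://localhost:11434/api/version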
Install NLTK data:
$ python -m nltk.downloader all
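Note that `all` downloads every NLTK corpus and model, which is large. A minimal sketch to confirm the data is usable afterwards (the sample sentence is arbitrary):

```python
# quick check that NLTK data is in place: word_tokenize needs the punkt models
import nltk

print(nltk.word_tokenize("NLTK data was installed successfully."))
# expected: ['NLTK', 'data', 'was', 'installed', 'successfully', '.']
```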
The following Ollama models are used in our experiments.
The following Hugging Face datasets are used in our experiments.
Use `HF_HUB_CACHE` to configure where repositories from the Hub will be cached locally (models, datasets, and spaces).
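For example, set it in the shell before running the experiments (the cache path is a placeholder; any writable directory works):
$ export HF_HUB_CACHE=<your-hf-cache-path>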
Run the simulator:
$ python simulator.py