LogWhisperer is a self-hosted, AI-powered log summarizer for Linux systems. It uses local LLMs (via Ollama) like Mistral or Phi to analyze system logs and generate concise summaries — entirely offline, with zero cloud dependency.
- Summarizes logs using a local LLM
- Supports `journalctl`, raw log files, and Docker container logs
- Automatically pulls missing models via Ollama
- `--follow` mode: live log monitoring and auto-summarization
- Saves human-readable summaries in Markdown format
- Prompt customization via `config.yaml`
- Works fully offline after initial model pull
- One-line installer with virtualenv, dependency setup, and Ollama integration
- Configurable model, host, and timeout, making it ideal for air-gapped or private deployments
```bash
git clone https://github.com/binary-knight/logwhisperer
cd logwhisperer
chmod +x install_logwhisperer.sh
./install_logwhisperer.sh
```
This creates a virtual environment, installs dependencies, configures Ollama, and pulls your selected model (if it isn't already installed).
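Prefer not to run a script unseen? The installer's steps can be reproduced by hand. This is a sketch, assuming the repo ships a `requirements.txt` (an assumed filename) and you want the default `mistral` model:

```bash
# Create and activate a virtual environment
python3 -m venv venv
source venv/bin/activate

# Install Python dependencies (requirements.txt is an assumed filename)
pip install -r requirements.txt

# Install Ollama on Linux and pull the default model from config.yaml
curl -fsSL https://ollama.com/install.sh | sh
ollama pull mistral
```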
```bash
# Summarize recent journalctl entries (the default source)
python3 logwhisperer.py

# Summarize a raw log file
python3 logwhisperer.py --source file --logfile /var/log/syslog

# Summarize a Docker container's logs
python3 logwhisperer.py --source docker --container my_app_container

# Continuously monitor and summarize every 60 seconds
python3 logwhisperer.py --follow --interval 60
```
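Every config value can also be overridden per run. For example, to use a smaller model against a remote Ollama instance with a longer timeout (the host address below is just an illustration):

```bash
python3 logwhisperer.py --model phi --ollama-host http://192.168.1.50:11434 --timeout 120
```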
LogWhisperer reads its defaults from a fully configurable `config.yaml`:
```yaml
# Available models from Ollama:
# - mistral
# - llama2
# - codellama
# - phi
# - gemma
# - tinyllama
# - dolphin-mixtral
# - or your own pulled/custom model
model: mistral
source: journalctl                   # or "file", "docker"
log_file_path: /var/log/syslog       # only used if source == file
priority: err                        # journalctl log level: emerg, alert, crit, err, warning, etc.
entries: 500                         # number of log entries to fetch
timeout: 90                          # number of seconds before LLM timeout
docker_container: my_container       # container name or ID
ollama_host: http://localhost:11434  # change if using a remote Ollama
prompt: |
  You are a helpful Linux operations assistant. Analyze the following logs:
  - Identify root causes
  - Summarize key issues
  - Recommend next steps
  LOGS:
  {{LOGS}}
```
The model defined here is used by default and will be pulled automatically if not found.
Customize how logs are interpreted using the `prompt:` field in `config.yaml`. Use `{{LOGS}}` as a placeholder for the actual log content injected at runtime.
| Flag | Description |
|---|---|
| `--source` | `journalctl`, `file`, or `docker` |
| `--logfile` | Path to a file (if source is `file`) |
| `--container` | Docker container name or ID (if source is `docker`) |
| `--entries` | Number of log entries to analyze |
| `--model` | Override model name (otherwise uses `config.yaml`) |
| `--ollama-host` | Override Ollama API URL |
| `--timeout` | Request timeout (in seconds) |
| `--follow` | Enable continuous summarization mode |
| `--interval` | Seconds between summaries (used with `--follow`) |
| `--list-models` | Show models currently installed in Ollama |
| `--version` | Show version number and exit |
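Two of these are handy before a first real run, for instance to confirm which models Ollama already has before reaching for `--model`:

```bash
python3 logwhisperer.py --list-models
python3 logwhisperer.py --version
```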
Summaries are saved in the `reports/` directory with timestamped `.md` filenames:

```
reports/log_summary_2025-05-08_14-15-01.md
```
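Because the filenames are timestamped, the latest summary is easy to pull up from a shell:

```bash
# Show the most recent summary (assumes at least one report exists)
cat "$(ls -t reports/*.md | head -n 1)"
```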
- **Missing virtualenv or pip issues?** Install the venv module and rerun the installer:

  ```bash
  sudo apt install python3-venv
  rm -rf venv
  ./install_logwhisperer.sh
  ```

- **On WSL?** `journalctl` won't work; use `source: file` in your config or pass `--source file`.

- **Model not found?** LogWhisperer now auto-pulls missing models. If that fails (e.g., offline), you can pull manually:

  ```bash
  ollama pull phi
  ```
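To see what's already available locally before pulling, Ollama's own CLI (not a LogWhisperer command) lists installed models:

```bash
ollama list
```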
Planned features:

- Output format options: `json`, `txt`, `md`
- Scheduled summaries via `cron` or `systemd` (an interim cron approach is sketched below)
- Pattern matching and alert triggers
- PyPI packaging (`pip install logwhisperer`)
- Fully air-gapped installer with pre-bundled models
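Until native scheduling lands, plain `cron` already works. A minimal sketch, assuming the repo lives at `/opt/logwhisperer` (a hypothetical path) and was set up with the bundled installer's virtualenv:

```bash
# Hypothetical crontab entry: summarize the syslog at the top of every hour.
# Adjust /opt/logwhisperer to wherever you cloned the repo.
0 * * * * cd /opt/logwhisperer && ./venv/bin/python logwhisperer.py --source file --logfile /var/log/syslog
```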
This project was developed independently by the author on personal time, using personal equipment, and without the use of employer-owned resources or intellectual property.
It is not affiliated with or endorsed by any current or former employer.
All rights to this software are retained by the author under the terms specified in the LICENSE file.
If LogWhisperer saves you time or headaches, consider contributing: pull requests, feedback, and bug reports are all welcome. If you're using LogWhisperer in production or have ideas, feel free to open an issue or submit a PR.