A pedagogical lab for learning telemetry implementation through practical Claude Code metrics tracking. This project emphasizes understanding over quick setup, providing workshops, exercises, and real-world examples.
- Real-time metrics collection via OpenTelemetry
- Custom Grafana dashboards with cost tracking
- Pedagogical debugging tools (OTLP interceptor, debug sink)
- Comprehensive test matrix for all configurations
- Cost analysis and model comparison
- Workshop materials for teaching telemetry concepts
- Copy .env.sample to .env and configure your environment
- Start the OTLP infrastructure (Prometheus, Grafana, OTLP Collector)
- Use make help to see available commands
- Try make otlp-debug-sink to capture raw telemetry data
The following dashboard screenshot shows actual Claude Code telemetry data captured during a 24-hour experiment:
Key Observations:
- Total Cost: $42.23 across 14 million tokens
- Model Distribution: Opus 4 dominated usage (11M tokens, $40.64)
- Cost Variance: 172x difference between Haiku ($0.24) and Opus 4 ($40.64)
- Single Session: All activity from one intensive session
This demonstrates the importance of telemetry for understanding AI operational costs. These metrics were collected via OpenTelemetry and visualized in Grafana.
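The headline figures above can be cross-checked with simple cost-per-million-token arithmetic. The helper below is purely illustrative (the function name is ours, not part of the lab's scripts); the numbers are the ones reported on the dashboard:

```python
def cost_per_mtok(cost_usd, tokens_millions):
    """Blended cost in USD per million tokens."""
    return cost_usd / tokens_millions

# Figures from the 24-hour experiment above.
overall = cost_per_mtok(42.23, 14)  # all models combined
opus = cost_per_mtok(40.64, 11)     # Opus 4 alone

print(f"blended: ${overall:.2f}/Mtok, Opus 4: ${opus:.2f}/Mtok")
# blended: $3.02/Mtok, Opus 4: $3.69/Mtok
```

The gap between the blended rate and the Opus 4 rate is exactly the kind of signal that only shows up once per-model cost metrics are collected.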
- Pedagogical Approach to Telemetry - Start here for the educational philosophy
- Phase 1 Experiment Results - Real-world cost analysis and lessons learned
- Comprehensive Technical Guide - Deep dive into all components
- Telemetry Contracts & APIs - Detailed specifications for debugging
- HTTP Interceptor Workshop - Build telemetry interceptors from scratch
- Netcat OTLP Exercises - Learn OTLP protocol with simple tools
- Dashboard Generator Tutorial - Create custom Grafana dashboards
- Dashboard Configuration - Pre-built dashboard examples
- Environment Configuration - Complete configuration reference
- Test Matrix - 54 test scenarios for telemetry configurations
# Capture raw OTLP requests without forwarding
make otlp-debug-sink
# Intercept and forward OTLP requests
make otlp-interceptor
# Verbose interceptor with real-time analysis
make otlp-interceptor-verbose
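Under the hood, a debug sink only has to accept OTLP/HTTP POSTs (e.g. to /v1/metrics) and dump the raw bodies. The make targets above use shell/netcat tooling; the Python sketch below is a stdlib-only illustration of the same idea, not the lab's actual implementation:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class OTLPDebugSink(BaseHTTPRequestHandler):
    """Accept OTLP/HTTP exports and log the raw request bodies."""

    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        body = self.rfile.read(length)
        print(f"{self.path}: {length} bytes, "
              f"content-type={self.headers.get('Content-Type')}")
        # Acknowledge with an empty success body so the exporter does not retry.
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(b"{}")

def serve(port=4318):
    """Listen on the default OTLP/HTTP port and print every export."""
    HTTPServer(("127.0.0.1", port), OTLPDebugSink).serve_forever()
```

Point OTEL_EXPORTER_OTLP_ENDPOINT at http://127.0.0.1:4318 and every metrics export becomes visible on stdout, which is the whole pedagogical point of a sink that never forwards.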
# Start metrics simulator (Brownian motion model)
make simulate
# Test with specific scenarios
make simulate-scenario SCENARIO=high_load
# Run simulator tests
make test-simulator
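The simulator's Brownian motion model reduces to a random walk on the token rate: each step adds a drift term plus Gaussian noise. A minimal sketch of that idea (function and parameter names are hypothetical, not taken from claude-metrics-simulator.py):

```python
import random

def simulate_token_rate(steps, start=100.0, drift=0.0, sigma=5.0, seed=None):
    """Brownian-motion-style model of tokens/sec: each step adds drift plus
    Gaussian noise, clamped at zero because a rate cannot go negative."""
    rng = random.Random(seed)
    rate, series = start, []
    for _ in range(steps):
        rate = max(0.0, rate + drift + rng.gauss(0.0, sigma))
        series.append(rate)
    return series
```

Seeding the generator makes scenario runs reproducible, which matters when the simulated stream is used to validate dashboards and alert thresholds.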
# Run cost and usage analysis
make analyze
# Generate Grafana dashboards
make dashboards
make dashboards-dev
make dashboards-prod
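Grafana dashboards are plain JSON, so generation is mostly templating panels over a list of metrics. The sketch below shows the shape of that approach with the lab's counter names; the panel layout and helper are assumptions, not the actual output of generate_dashboards.py:

```python
import json

def stat_panel(title, metric, x, y, w=6, h=4):
    """One Grafana 'stat' panel querying a Prometheus counter."""
    return {
        "type": "stat",
        "title": title,
        "gridPos": {"x": x, "y": y, "w": w, "h": h},
        "targets": [{"expr": metric, "refId": "A"}],
    }

METRICS = [
    ("Total Sessions", "session_count_total"),
    ("Total Tokens Used", "token_usage_tokens_total"),
    ("Total Cost USD", "cost_usage_USD_total"),
    ("Total Commits", "commit_count_total"),
]

dashboard = {
    "title": "Claude Code Metrics Enhanced",
    "panels": [stat_panel(t, m, x=i * 6, y=0) for i, (t, m) in enumerate(METRICS)],
}
print(json.dumps(dashboard, indent=2))
```

Keeping metric names in one table and generating the JSON means dev and prod variants can differ only in data-source and refresh settings.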
# Code quality checks
make lint
make format
# Clean generated files
make clean
# Show all commands
make help
graph TB
subgraph "Claude Code Metrics Enhanced Dashboard"
subgraph "Row 1 - Key Metrics (4 units high)"
S1[Total Sessions<br/>Counter: session_count_total]
S2[Total Tokens Used<br/>Counter: token_usage_tokens_total]
S3[Total Cost USD<br/>Counter: cost_usage_USD_total]
S4[Total Commits<br/>Counter: commit_count_total]
end
subgraph "Row 2 - Usage Trends (8 units high)"
TS1[Token Usage Rate by Type<br/>Time Series<br/>Grouped by: type]
TS2[Cost Rate by Model<br/>Time Series<br/>Grouped by: model]
end
subgraph "Row 3 - Model Breakdown (8 units high)"
T1[Usage by Model<br/>Table View<br/>Shows: Tokens & Cost per model]
end
subgraph "Row 4 - Activity Analysis (8 units high)"
TS3[Hourly Token Usage<br/>Stacked Bar Chart<br/>Grouped by: model]
TS4[Activity Rate<br/>Time Series<br/>Sessions/sec & Commits/sec]
end
end
style S1 fill:#2d4a2b,stroke:#73bf69,color:#fff
style S2 fill:#4a4a2b,stroke:#f2cc0c,color:#fff
style S3 fill:#4a4a2b,stroke:#f2cc0c,color:#fff
style S4 fill:#2d4a2b,stroke:#73bf69,color:#fff
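The time-series panels in the layout above map onto straightforward PromQL: a rate() over the relevant counter, grouped by a label. A small helper for building such queries (the query shape is inferred from the panel descriptions, not copied from the dashboard JSON):

```python
def rate_by_label(metric, label, window="5m"):
    """PromQL: per-label rate over a counter, e.g. tokens/sec by type."""
    return f"sum by ({label}) (rate({metric}[{window}]))"

# Row 2 panels from the layout above:
token_rate = rate_by_label("token_usage_tokens_total", "type")
cost_rate = rate_by_label("cost_usage_USD_total", "model")
print(token_rate)  # sum by (type) (rate(token_usage_tokens_total[5m]))
```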
claude-code-metrics-lab/
├── .env.sample        # Environment configuration template
├── Makefile           # All available commands (run 'make help')
├── README.org         # This file
├── CLAUDE.org         # Claude-specific configuration
├── docs/              # Documentation
│   ├── grafana-dashboard-example.png
│   ├── telemetry-contracts.org
│   └── github-issue-*.md          # Issue templates
├── src/               # Analysis scripts
│   ├── project_metrics.py
│   ├── cost_analyzer.py
│   └── session_analyzer.py
├── scripts/           # Utility scripts
│   ├── claude-metrics-simulator.py
│   ├── generate_dashboards.py
│   └── otlp-http-interceptor.sh
├── dashboards/        # Grafana dashboard JSON files
├── exports/           # Analysis output directory
└── test_results/      # Test matrix results
- Python 3.8+ with uv package manager
- OpenTelemetry Collector (OTLP)
- Prometheus for metrics storage
- Grafana for visualization
- Claude API access with telemetry enabled
- netcat (nc) for debugging tools
- Optional: socat for advanced HTTP interception
This lab is designed for experimentation and learning. Contributions welcome:
- Document new telemetry patterns
- Add workshop exercises
- Share dashboard improvements
- Report cost anomalies