Claude Code Metrics Lab

Overview

A pedagogical lab for learning telemetry implementation through practical Claude Code metrics tracking. This project emphasizes understanding over quick setup, providing workshops, exercises, and real-world examples.

Features

  • Real-time metrics collection via OpenTelemetry
  • Custom Grafana dashboards with cost tracking
  • Pedagogical debugging tools (OTLP interceptor, debug sink)
  • Comprehensive test matrix for all configurations
  • Cost analysis and model comparison
  • Workshop materials for teaching telemetry concepts

Quick Start

  1. Copy .env.sample to .env and configure your environment (a sketch follows this list)
  2. Start the OTLP infrastructure (Prometheus, Grafana, OTLP Collector)
  3. Run make help to see the available commands
  4. Try make otlp-debug-sink to capture raw telemetry data
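
The authoritative keys live in .env.sample; as a hypothetical sketch only (these names follow the documented Claude Code and OpenTelemetry conventions, so verify them against your copy of .env.sample):

# Hypothetical .env sketch -- confirm keys against .env.sample
CLAUDE_CODE_ENABLE_TELEMETRY=1                     # opt in to Claude Code telemetry
OTEL_METRICS_EXPORTER=otlp                         # export metrics over OTLP
OTEL_EXPORTER_OTLP_PROTOCOL=grpc                   # or http/protobuf
OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4317  # your collector's endpoint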

Real-World Metrics Example

The following dashboard screenshot shows actual Claude Code telemetry data captured during a 24-hour experiment:

[Dashboard screenshot: docs/grafana-dashboard-example.png]

Key Observations:

  • Total Cost: $42.23 across 14 million tokens
  • Model Distribution: Opus 4 dominated usage (11M tokens, $40.64)
  • Cost Variance: 172x difference between Haiku ($0.24) and Opus 4 ($40.64)
  • Single Session: All activity from one intensive session

This demonstrates the importance of telemetry for understanding AI operational costs. These metrics were collected via OpenTelemetry and visualized in Grafana.
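
As a back-of-the-envelope check on those numbers (a standalone sketch, not project code):

# Effective cost rates implied by the dashboard figures above
total_cost, total_tokens = 42.23, 14_000_000
opus_cost, opus_tokens = 40.64, 11_000_000

print(f"blended rate: ${total_cost / (total_tokens / 1e6):.2f}/Mtok")  # ~$3.02
print(f"Opus 4 rate:  ${opus_cost / (opus_tokens / 1e6):.2f}/Mtok")    # ~$3.69

In other words, Opus 4 produced roughly 96% of the cost on about 79% of the tokens, so the model mix, not raw token volume, dominates the bill.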

Learning Resources

  • 📚 Tutorials and Guides
  • 🛠️ Workshops and Exercises
  • 📊 Configuration and Examples

Available Tools

Telemetry Debugging

# Capture raw OTLP requests without forwarding
make otlp-debug-sink

# Intercept and forward OTLP requests
make otlp-interceptor

# Verbose interceptor with real-time analysis
make otlp-interceptor-verbose
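
Conceptually, the debug sink is just a listener that accepts OTLP/HTTP POSTs and writes the raw bodies to disk without forwarding them anywhere. A minimal Python sketch of the idea (the actual make targets are netcat/socat-based; 4318 is the standard OTLP/HTTP port):

import http.server

class DebugSink(http.server.BaseHTTPRequestHandler):
    """Accept OTLP/HTTP POSTs, dump the raw protobuf body, never forward."""
    def do_POST(self):
        body = self.rfile.read(int(self.headers.get("Content-Length", 0)))
        with open("otlp-capture.bin", "ab") as f:
            f.write(body)
        print(f"{self.path}: captured {len(body)} bytes")
        self.send_response(200)  # reply like a healthy collector would
        self.send_header("Content-Type", "application/x-protobuf")
        self.end_headers()

if __name__ == "__main__":
    http.server.HTTPServer(("localhost", 4318), DebugSink).serve_forever()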

Metrics Simulation

# Start metrics simulator (Brownian motion model)
make simulate

# Test with specific scenarios
make simulate-scenario SCENARIO=high_load

# Run simulator tests
make test-simulator
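
The Brownian motion model means the simulated usage rate takes a small Gaussian step each tick instead of jumping arbitrarily, which yields realistic-looking drift. A self-contained sketch of that model (the project's claude-metrics-simulator.py is richer; parameter names here are illustrative):

import random

def brownian_rate(start=100.0, sigma=10.0, floor=0.0):
    """Yield a token-usage rate that drifts by Gaussian steps (Brownian motion)."""
    rate = start
    while True:
        rate = max(floor, rate + random.gauss(0.0, sigma))
        yield rate

sim = brownian_rate()
for tick in range(5):
    print(f"tick {tick}: {next(sim):.1f} tokens/sec")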

Analysis and Dashboards

# Run cost and usage analysis
make analyze

# Generate Grafana dashboards
make dashboards
make dashboards-dev
make dashboards-prod
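
Dashboard generation amounts to emitting Grafana's JSON model from code. A toy sketch of the idea (generate_dashboards.py is the real implementation; the panel and file names here are illustrative):

import json

# Toy sketch: emit a one-panel Grafana dashboard as JSON
dashboard = {
    "title": "Claude Code Metrics (dev)",
    "panels": [{
        "type": "timeseries",
        "title": "Token Usage Rate by Type",
        "targets": [{"expr": "sum by (type) (rate(token_usage_tokens_total[5m]))"}],
    }],
}

with open("dashboards/dev.json", "w") as f:
    json.dump({"dashboard": dashboard}, f, indent=2)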

Development Tools

# Code quality checks
make lint
make format

# Clean generated files
make clean

# Show all commands
make help

Dashboard Architecture

graph TB
    subgraph "Claude Code Metrics Enhanced Dashboard"
        subgraph "Row 1 - Key Metrics (4 units high)"
            S1[Total Sessions<br/>Counter: session_count_total]
            S2[Total Tokens Used<br/>Counter: token_usage_tokens_total]
            S3[Total Cost USD<br/>Counter: cost_usage_USD_total]
            S4[Total Commits<br/>Counter: commit_count_total]
        end
        
        subgraph "Row 2 - Usage Trends (8 units high)"
            TS1[Token Usage Rate by Type<br/>Time Series<br/>Grouped by: type]
            TS2[Cost Rate by Model<br/>Time Series<br/>Grouped by: model]
        end
        
        subgraph "Row 3 - Model Breakdown (8 units high)"
            T1[Usage by Model<br/>Table View<br/>Shows: Tokens & Cost per model]
        end
        
        subgraph "Row 4 - Activity Analysis (8 units high)"
            TS3[Hourly Token Usage<br/>Stacked Bar Chart<br/>Grouped by: model]
            TS4[Activity Rate<br/>Time Series<br/>Sessions/sec & Commits/sec]
        end
    end
    
    style S1 fill:#2d4a2b,stroke:#73bf69,color:#fff
    style S2 fill:#4a4a2b,stroke:#f2cc0c,color:#fff
    style S3 fill:#4a4a2b,stroke:#f2cc0c,color:#fff
    style S4 fill:#2d4a2b,stroke:#73bf69,color:#fff


Project Structure

claude-code-metrics-lab/
├── .env.sample              # Environment configuration template
├── Makefile                 # All available commands (run 'make help')
├── README.org               # This file
├── CLAUDE.org               # Claude-specific configuration
│
├── docs/                    # Documentation
│   ├── grafana-dashboard-example.png
│   ├── telemetry-contracts.org
│   └── github-issue-*.md    # Issue templates
│
├── src/                     # Analysis scripts
│   ├── project_metrics.py
│   ├── cost_analyzer.py
│   └── session_analyzer.py
│
├── scripts/                 # Utility scripts
│   ├── claude-metrics-simulator.py
│   ├── generate_dashboards.py
│   └── otlp-http-interceptor.sh
│
├── dashboards/              # Grafana dashboard JSON files
├── exports/                 # Analysis output directory
└── test_results/            # Test matrix results

Requirements

  • Python 3.8+ with uv package manager
  • OpenTelemetry Collector (OTLP)
  • Prometheus for metrics storage
  • Grafana for visualization
  • Claude API access with telemetry enabled
  • netcat (nc) for debugging tools
  • Optional: socat for advanced HTTP interception

Contributing

This lab is designed for experimentation and learning. Contributions are welcome:

  1. Document new telemetry patterns
  2. Add workshop exercises
  3. Share dashboard improvements
  4. Report cost anomalies

