
What’s Collected

Tandemu uses the OpenTelemetry standard to collect three types of data from Claude Code sessions:

Type     Examples
-------  -----------------------------------------------------
Traces   Session start/end, tool executions, deployment events
Metrics  AI-generated lines, manual lines, token consumption
Logs     Prompt loops, API errors, friction events

Privacy

Tandemu is designed for stealth observability — it tracks session-level metrics, not individual actions:

What IS tracked:

  • How long a session lasted
  • How many lines of code were AI-generated vs manually typed
  • Which files had repeated errors (friction)
  • Deployment frequency and failure rates

What is NOT tracked:

  • Individual keystrokes
  • Screen recordings
  • Prompt content (what the developer asked Claude)
  • Code content (what was written)
  • Idle time or break tracking

Pipeline

Developer's Terminal
  │
Claude Code (emits OpenTelemetry data)
  │
OTel Collector (port 4317/4318)
  ├── Validates and batches data
  └── Tags with organization ID
  │
ClickHouse (analytical database)
  ├── otel_traces — session and deployment data
  ├── otel_metrics_sum — code line counts
  └── otel_logs — friction events
  │
NestJS Backend (queries ClickHouse)
  │
Dashboard / Claude Code Skills
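The collector's tag-and-batch stage can be sketched as pure functions. This is an illustration only: in practice the OTel Collector does this via its built-in processors, and the `org.id` attribute key used here is a hypothetical name, not one confirmed by these docs.

```typescript
// Minimal shape of a telemetry record for this sketch.
interface TelemetryRecord {
  attributes: Record<string, string>;
  body: unknown;
}

// Attach the organization ID to every record.
// "org.id" is a hypothetical attribute key chosen for illustration.
function tagWithOrg(records: TelemetryRecord[], orgId: string): TelemetryRecord[] {
  return records.map((rec) => ({
    ...rec,
    attributes: { ...rec.attributes, "org.id": orgId },
  }));
}

// Group records into fixed-size batches before export.
function batch<T>(records: T[], size: number): T[][] {
  const out: T[][] = [];
  for (let i = 0; i < records.length; i += size) {
    out.push(records.slice(i, i + size));
  }
  return out;
}
```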

Metrics Explained

AI vs Manual Ratio

Measures the proportion of code generated by Claude Code versus code typed by the developer.

  • Tracked via the code.lines.ai_generated and code.lines.manual metrics
  • Calculated per session and aggregated by team, sprint, or time period
  • A ratio of 2.5x means 2.5 lines of AI code for every 1 line of manual code
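The ratio can be computed directly from the two counters. A minimal sketch — the metric names are the ones listed above, but the function name and the handling of all-AI sessions are our assumptions:

```typescript
// Ratio of code.lines.ai_generated to code.lines.manual for a session.
// Returns Infinity for all-AI sessions (assumed behavior, not specified).
function aiManualRatio(aiGenerated: number, manual: number): number {
  if (manual === 0) return aiGenerated > 0 ? Infinity : 0;
  return aiGenerated / manual;
}

// Example: 250 AI-generated lines and 100 manual lines → 2.5x
```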

Friction Heatmap

Identifies code areas where developers struggle repeatedly.

Friction is detected by:

  • Prompt loops — The developer asks Claude to fix the same issue multiple times
  • Tool execution errors — Claude’s file edits or commands fail repeatedly

Each friction event is tagged with the file path, so Tandemu can show which files cause the most trouble.

Severity levels:

  • Critical (red) — 5+ prompt loops or 3+ errors
  • Warning (yellow) — 2-4 prompt loops
  • Info (green) — 1 prompt loop
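The thresholds above translate into a simple classifier. A sketch under our assumptions: the `"none"` level for friction-free files is not mentioned in the docs, and the docs do not say how 1–2 tool errors map to a level, so this mirrors only the stated rules:

```typescript
type Severity = "critical" | "warning" | "info" | "none";

// Map per-file friction counts to a severity level,
// following the thresholds listed above.
function frictionSeverity(promptLoops: number, toolErrors: number): Severity {
  if (promptLoops >= 5 || toolErrors >= 3) return "critical";
  if (promptLoops >= 2) return "warning";
  if (promptLoops >= 1) return "info";
  return "none"; // assumed default, not specified in the docs
}
```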

DORA Metrics

The four DORA metrics are inferred from telemetry:

  • Deployment Frequency — Counted from deployment trace events
  • Lead Time for Changes — Duration between first commit and deployment
  • Change Failure Rate — Percentage of deployments that result in failures
  • Time to Restore — Duration of incident resolution traces
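Two of these can be sketched from deployment events alone. The `Deployment` shape is our assumption about what the trace events carry, and the choice of the median for lead time is a common convention rather than something these docs specify:

```typescript
// Assumed shape of a deployment derived from trace events.
interface Deployment {
  commitTime: Date; // first commit in the change
  deployTime: Date;
  failed: boolean;
}

// Change Failure Rate: share of deployments that resulted in failure.
function changeFailureRate(deploys: Deployment[]): number {
  if (deploys.length === 0) return 0;
  return deploys.filter((d) => d.failed).length / deploys.length;
}

// Lead Time for Changes: median hours from first commit to deployment.
function medianLeadTimeHours(deploys: Deployment[]): number {
  const hours = deploys
    .map((d) => (d.deployTime.getTime() - d.commitTime.getTime()) / 3_600_000)
    .sort((a, b) => a - b);
  const mid = Math.floor(hours.length / 2);
  return hours.length % 2 ? hours[mid] : (hours[mid - 1] + hours[mid]) / 2;
}
```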

Passive Time Tracking

Session duration is calculated from trace start/end timestamps:

  • A “session” starts when a developer begins a Claude Code session
  • It ends when they close it or after a period of inactivity
  • Total hours are aggregated per developer per day
  • AI-assisted hours count only the sessions in which AI tools were actively used
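The daily aggregation can be sketched from trace timestamps. This is a per-developer sketch under our assumptions: sessions that cross midnight are attributed to their start day here, since the docs do not specify the real attribution rule:

```typescript
// A session span derived from trace start/end timestamps (epoch ms).
interface SessionSpan {
  start: number;
  end: number;
}

// Sum one developer's session hours per UTC day.
// Midnight-crossing sessions go to their start day (simplification).
function hoursPerDay(sessions: SessionSpan[]): Map<string, number> {
  const totals = new Map<string, number>();
  for (const s of sessions) {
    const day = new Date(s.start).toISOString().slice(0, 10); // "YYYY-MM-DD"
    const hours = (s.end - s.start) / 3_600_000;
    totals.set(day, (totals.get(day) ?? 0) + hours);
  }
  return totals;
}
```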

Data Retention

By default, telemetry data in ClickHouse has a 90-day TTL. Older data is automatically purged. This can be configured in the ClickHouse table definitions.
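As a hypothetical example of such a configuration change, a retention window can be adjusted with a ClickHouse `MODIFY TTL` statement. The table name `otel_traces` comes from the pipeline above, but the `Timestamp` column name and exact DDL depend on the exporter schema your deployment uses:

```sql
-- Hypothetical: extend retention on the traces table to 180 days.
-- Column name assumes the default ClickHouse exporter schema.
ALTER TABLE otel_traces
  MODIFY TTL toDateTime(Timestamp) + INTERVAL 180 DAY;
```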
