Methodology Overview

Software teams have spent decades organizing work around sprints, story points, and standup meetings. These rituals made sense when the bottleneck was coordination between humans writing code. That bottleneck has shifted.

With AI coding agents handling implementation, the limiting factor is no longer typing speed or individual productivity — it’s how fast a team can move from intent to shipped code while maintaining quality. Tandemu introduces a methodology designed for this reality.

The core idea

A developer picks a task. They work on it with an AI agent. When it’s done, they mark it finished. Everything in between — time spent, code written, friction encountered — is measured automatically.

No standups to prepare. No timesheets to fill. No story points to estimate. The work itself generates the signal.

/morning → pick a task → work with AI → /finish → metrics are captured

This is not a replacement for how your team organizes work. It’s a layer on top that captures what actually happened, regardless of whether you use Scrum, Kanban, or no framework at all.

Principles

One task at a time

A developer has exactly one active task. Starting a new task requires finishing or pausing the current one. This isn’t about rigid process — it’s about measurement. When a task has a clear start and end, you get accurate cycle times, accurate attribution, and a clear record of what was delivered.
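The invariant above can be sketched as a tiny state container. This is illustrative only: the class and method names (`TaskBoard`, `start`, `finish`) are hypothetical, not Tandemu's actual API.

```python
class TaskBoard:
    """Sketch of the one-active-task-per-developer invariant."""

    def __init__(self):
        self.active = {}   # developer -> their single active task
        self.done = []     # (developer, task) completion log

    def start(self, dev, task):
        # Starting a second task while one is active is rejected,
        # so every task gets a clean start timestamp.
        if dev in self.active:
            raise RuntimeError(
                f"{dev} must finish or pause {self.active[dev]!r} first"
            )
        self.active[dev] = task

    def finish(self, dev):
        # Raises KeyError if the developer has nothing active.
        task = self.active.pop(dev)
        self.done.append((dev, task))
        return task
```

Because a task's start and end are explicit transitions, cycle time falls out of the record rather than being reconstructed after the fact.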

The task is the unit of delivery

Traditional frameworks measure velocity in story points or sprint completions. Tandemu measures at the task level. Each completed task is a unit of delivered work — with a known duration, a known set of code changes, and a known ratio of AI-generated to manually written lines.

This means “deployment frequency” in DORA terms equals the rate at which your team finishes tasks. Lead time is the wall-clock time from /morning to /finish. No CI/CD integration needed for the baseline metrics.
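The two baseline metrics reduce to simple arithmetic over task records. A minimal sketch, assuming finished tasks are available as start/finish timestamp pairs (the record format here is invented for illustration):

```python
from datetime import datetime, timedelta

# Hypothetical task records; Tandemu's actual export format is not shown here.
tasks = [
    {"started": datetime(2024, 5, 6, 9, 0),  "finished": datetime(2024, 5, 6, 15, 30)},
    {"started": datetime(2024, 5, 7, 9, 15), "finished": datetime(2024, 5, 8, 11, 0)},
    {"started": datetime(2024, 5, 9, 10, 0), "finished": datetime(2024, 5, 9, 13, 45)},
]

def lead_times(tasks):
    """Wall-clock time from /morning to /finish, per task."""
    return [t["finished"] - t["started"] for t in tasks]

def deployment_frequency(tasks, window_days):
    """Finished tasks per day over the reporting window."""
    return len(tasks) / window_days

avg_lead = sum(lead_times(tasks), timedelta()) / len(tasks)
```

Nothing here depends on a pipeline: the timestamps come from the task lifecycle itself, which is why the baseline works without CI/CD integration.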

AI attribution is built in

Every commit made through Claude Code carries a Co-Authored-By: Claude tag. When a task is finished, Tandemu diffs the branch, identifies which commits were AI-assisted, and calculates the ratio. This gives engineering leads a real, measured answer to “how much of our code is AI-generated?” — not a guess.
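The attribution calculation can be approximated in a few lines. This is a sketch of the idea, assuming commits are available as (message, lines added) pairs; the real implementation diffs the branch rather than trusting line counts.

```python
# Commits made through Claude Code carry this trailer in the message.
AI_TRAILER = "Co-Authored-By: Claude"

def ai_ratio(commits):
    """Fraction of added lines that came from AI-assisted commits."""
    ai = sum(n for msg, n in commits if AI_TRAILER in msg)
    total = sum(n for _, n in commits)
    return ai / total if total else 0.0
```

Keying attribution off the commit trailer means the signal survives rebases and squashes that preserve the message, with no extra bookkeeping by the developer.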

Observability without surveillance

Tandemu captures session duration, code metrics, and friction events from Claude Code’s native telemetry. It does not record keystrokes, screen activity, or idle time. The data flows through standard OpenTelemetry, so it’s auditable and transparent.
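The capture boundary amounts to an allowlist: named signal types pass, everything else is dropped. A minimal sketch — the event names below are hypothetical placeholders, not Claude Code's actual telemetry schema:

```python
# Only these signal categories are retained; anything resembling
# keystroke, screen, or idle tracking is simply never collected.
CAPTURED = {"session.duration", "code.lines_changed", "friction.event"}

def filter_events(events):
    """Keep allowlisted telemetry events; drop everything else."""
    return [e for e in events if e["name"] in CAPTURED]
```

An allowlist (rather than a blocklist) is what makes the privacy claim auditable: the full set of collected signals is enumerable in one place.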

Developers see the same data their leads see. There’s no hidden dashboard.
