The old loop
Traditional software delivery follows a predictable pattern:
- Product defines a requirement
- A developer picks it up, reads the spec, explores the codebase
- They write code, test it, iterate
- They create a PR, wait for review, address feedback
- It merges and ships
Steps 2-4 typically take days. The developer carries the full cognitive load — understanding the domain, navigating the codebase, writing the implementation, debugging edge cases.
The AI-first loop
With an AI coding agent, the loop compresses:
- Product defines a requirement
- A developer picks it up (/morning)
- They describe the intent to the AI agent, review and steer the implementation
- The agent writes code, the developer validates
- They finish (/finish), create a PR, ship
The developer’s role shifts from implementer to director. They still need to understand the domain and make architectural decisions, but the mechanical work of writing boilerplate, looking up APIs, and structuring files is handled by the agent.
What this changes
Cycle times drop dramatically
When the bottleneck was typing and debugging, a task might take 2-3 days. With an AI agent handling implementation, the same task can complete in hours. Tandemu measures this automatically — the time between /morning and /finish is the real cycle time, with no estimation or self-reporting.
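The measurement itself is simple arithmetic over event timestamps. As a rough sketch (the function name and timestamp format are illustrative assumptions, not Tandemu's actual telemetry schema), cycle time is just the elapsed time between a /morning event and its matching /finish:

```python
from datetime import datetime

def cycle_time_hours(morning_ts: str, finish_ts: str) -> float:
    """Elapsed hours between a /morning event and its matching /finish.

    Timestamps are ISO 8601 strings. This is a hypothetical sketch of
    the calculation, not Tandemu's real API.
    """
    start = datetime.fromisoformat(morning_ts)
    end = datetime.fromisoformat(finish_ts)
    return (end - start).total_seconds() / 3600

# A task started at 9:00 and finished at 10:45 has a 1.75-hour cycle time.
print(cycle_time_hours("2024-05-01T09:00:00", "2024-05-01T10:45:00"))
```

Because both endpoints are recorded automatically, the number reflects wall-clock reality rather than an estimate.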
The ratio of thinking to typing inverts
In traditional development, maybe 30% of time is spent thinking about what to build and 70% is spent implementing it. With AI, this inverts. Developers spend more time on intent, architecture, and review — and less on the mechanics of writing code.
This is a good thing. The work that remains with the developer is the high-judgment work that AI can’t do well: understanding user needs, making tradeoff decisions, catching subtle bugs in generated code.
Code volume increases, so quality signals matter more
AI agents produce code fast. A team might ship 3x more code per week. But volume alone doesn’t tell you if the code is good. That’s why Tandemu tracks friction — prompt loops where the developer repeatedly asks the AI to fix the same issue indicate either a complex problem or poor code quality. High friction on a specific file is a signal worth investigating.
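One simple way to surface that signal is to count how often consecutive prompts target the same file. The sketch below is a hypothetical illustration (the log format and function name are assumptions, not how Tandemu actually records telemetry):

```python
from collections import Counter

def friction_by_file(prompt_log):
    """Count prompt loops per file.

    prompt_log is a list of (file, prompt_text) tuples -- a stand-in for
    whatever the real telemetry captures. Each time a prompt targets the
    same file as the prompt before it, that file's friction goes up by
    one: the developer is circling back instead of moving on.
    """
    friction = Counter()
    last_file = None
    for file, _prompt in prompt_log:
        if file == last_file:
            friction[file] += 1  # same file prompted again: one loop iteration
        last_file = file
    return friction

log = [
    ("auth.py", "fix the login bug"),
    ("auth.py", "still failing, fix it again"),  # loop on auth.py
    ("api.py", "add the endpoint"),
    ("auth.py", "fix token refresh"),            # new topic, not a loop
]
print(friction_by_file(log))  # auth.py accumulates friction, api.py does not
```

A real implementation would need smarter matching than exact filenames, but even this crude count makes the hotspots visible.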
“How much AI?” becomes a real question
Engineering leadership wants to know whether the team is actually leveraging AI or just using it as a fancy autocomplete. Tandemu answers this by measuring the AI-to-manual code ratio at the commit level. A team with a 20% AI ratio might need better tooling or training. A team at 90% might need to slow down and review more carefully.
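The ratio itself is straightforward once changed lines are attributed to an origin. As a minimal sketch (the attribution mechanism is assumed; only the arithmetic is shown here, and the names are hypothetical):

```python
def ai_ratio(lines_by_origin):
    """AI share of changed lines in one commit.

    lines_by_origin maps an origin label to a line count, e.g.
    {"ai": 120, "manual": 30}. How lines get attributed to "ai" vs
    "manual" is the hard part and is assumed here.
    """
    ai = lines_by_origin.get("ai", 0)
    total = ai + lines_by_origin.get("manual", 0)
    return ai / total if total else 0.0

# A commit with 120 AI-written and 30 hand-written lines is 80% AI.
print(ai_ratio({"ai": 120, "manual": 30}))
```

Aggregating this per-commit number over a week gives the team-level figure that leadership is asking about.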
The developer’s day
A typical day with Tandemu looks like:
| Time | Activity |
|---|---|
| 9:00 | /morning — pick the highest-priority task assigned to you |
| 9:05 | Read the task description, orient in the codebase |
| 9:15 | Start working with Claude Code — describe what you need, iterate |
| 10:30 | Code is working, tests pass |
| 10:45 | /finish — commit, create PR, telemetry is captured |
| 10:50 | /morning — pick the next task |
| … | Repeat |
Multiple tasks per day is normal. Each one has a measured cycle time, a measured AI ratio, and a clear record of what changed.
What this is not
This is not about replacing developers with AI. The AI agent can’t:
- Decide what to build
- Understand user pain points
- Make architectural tradeoffs
- Catch business logic errors in generated code
- Navigate organizational politics to get a feature shipped
The developer’s role is more important than ever — it’s just different. Tandemu is built around this new role.