For 50 years, software engineering was built around one constraint: humans write code. Every tool we adopted — better languages, better frameworks, better IDEs — optimized around that bottleneck.
That constraint is gone.
AI didn't make coding faster. It made coding a commodity. And when a fundamental constraint disappears, everything built around it has to be rethought.
This memo captures the framework from our team discussion. It's meant to be a reference — something to come back to as we figure out how to work in this new dimension.
The shift isn't "do the same things faster." It's that things that didn't make sense before now make sense:
Projects that couldn't justify a business case now can. Roles you'd never staff are now free. Tools you'd never build now take days instead of months.
This is a new dimension of capability. Not a faster horse. A car.
Manual — You write the code. AI helps with research, questions, and boilerplate. Copy-paste from ChatGPT.
AI-Assisted — AI writes code in your project via tools like Cursor, Copilot, or Claude Code. You review every change. Faster, but you're still the bottleneck. This is where most teams are today.
Compound — You stop reviewing every output and start building a system that improves itself. Rules encode non-negotiable behaviors. Skills package repeatable workflows. Memory persists what the system learns. Verification hooks catch quality issues before you see them. Each unit of work makes the next one easier. This is the unlock.
Autonomous — Defined workflows run without you. Automated verification catches issues with zero human involvement. Deployments, monitoring, testing fire automatically. The system self-monitors and self-corrects. This is the frontier.
This isn't a maturity model. It's a map. No judgment — just position. The industry is moving right. The question is how quickly we move with it.
Most teams adopted AI to move faster. Then they added a human review layer on top of every AI output. They didn't eliminate work — they moved the bottleneck.
writing code → reviewing code someone else wrote
Except that "someone" is tireless and produces at machine speed. You can't keep up with a review-everything approach. The math doesn't work.
The answer isn't "stop reviewing." The answer is stop reviewing everything and start building systems that verify for you.
Your job narrows to three things. The AI handles everything else.
01 DIRECTION
What to build and why. Strategy, tradeoffs, priorities. The AI can't decide your product strategy or choose your architecture for your specific context. This is where taste matters.
02 VERIFICATION DESIGN
Not reviewing code. Designing the systems that verify code. Rules, hooks, automated tests, acceptance criteria. Design the verification system — don't be the verification system.
03 SYSTEM IMPROVEMENT
When something breaks, don't fix the output and move on. Fix the system so that category of problem never happens again. Add a rule. Update a skill. Write a hook. The goal: never make the same manual correction twice.
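What "write a hook" looks like in practice: suppose the AI keeps reintroducing a pattern that once caused a production bug, say naive timestamps from `datetime.utcnow()`. Instead of correcting each occurrence by hand, encode the ban once as an automated check. A minimal sketch in Python; the banned pattern is a made-up example, not from this memo:

```python
import pathlib
import re
import sys

# Hypothetical recurring bug: naive datetimes from datetime.utcnow().
# Once the fix is encoded here, that category of bug can't come back.
BANNED = re.compile(r"datetime\.utcnow\(")

def check(paths):
    """Return a list of (path, line_number) violations."""
    violations = []
    for path in paths:
        text = pathlib.Path(path).read_text()
        for n, line in enumerate(text.splitlines(), 1):
            if BANNED.search(line):
                violations.append((path, n))
    return violations

if __name__ == "__main__" and len(sys.argv) > 1:
    for path, n in check(sys.argv[1:]):
        print(f"{path}:{n}: use timezone-aware datetime.now(timezone.utc)")
```

Wire a check like this into a pre-commit step or the agent's verification hook, and violations get caught before a human ever reads the diff. That is the correction made once, then never again.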
The real unlock isn't speed. It's compounding.
Every bug fix becomes a rule. Every pattern becomes a skill. Every preference becomes memory. Each unit of work makes the next one easier.
The practice: spend half your time shipping, half improving the system. It feels wrong at first, like wasting time on infrastructure instead of delivering. But by week three, the compound effect kicks in. Tasks that took 20 minutes take 2. Whole categories of bugs disappear. The AI starts producing output that matches your standards on the first try, because you taught the system what those standards are.
Three tools built solo in the last few months. All three exist only because AI changed what's possible.
altitude + horizon
Custom architecture analysis tools. Map every module, track dependencies, surface invariants, measure blast radius. Before AI: a multi-person team project spanning months. With AI: days. Would never have been built otherwise.
roundtable reviews
15 specialized AI agents — security, performance, accessibility, QA, domain expert — collaborating on every feature review. They share findings, cross-reference concerns, and gate the work. No company would assign 15 reviewers to every feature. Now it's part of every commit.
pods
A desktop app for orchestrating multiple AI engineers working in parallel on the same codebase. Git worktree isolation, session management, diff review. A tool that has no reason to exist in a pre-AI world.
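The isolation mechanism pods relies on is plain git: each parallel agent gets its own worktree, a separate checkout of the same repository, so concurrent edits never collide in one working directory. A throwaway demonstration (the repo and branch names are invented for illustration):

```shell
#!/bin/sh
set -e

# Fresh demo repository.
rm -rf /tmp/pods-demo
mkdir -p /tmp/pods-demo && cd /tmp/pods-demo
git init -q main-checkout
cd main-checkout
git -c user.email=demo@example.com -c user.name=demo \
    commit -q --allow-empty -m "initial"

# Give each parallel "AI engineer" its own isolated checkout
# on its own branch.
git worktree add -q ../agent-a -b agent-a
git worktree add -q ../agent-b -b agent-b

# Each worktree is edited, tested, and committed independently;
# all of them share one object store, so merges stay cheap.
git worktree list
```

Each directory under /tmp/pods-demo can now run its own builds and commits without touching the others, which is exactly what parallel agents need.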
The agentic AI engineer is not a junior developer that needs babysitting. It's a counterpart — an impact magnifier.
The difference between micromanaging and leading:
Micromanaging: "Write me a function that validates email addresses."
Leading: "Here's what we need. Here are the constraints. Here's how we'll verify it. Build it."
Provide the what, why, and high-level how. Let the AI handle the implementation. When it gets it wrong, that's a signal to improve your system — not to micromanage harder.
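Sticking with the email example above: "here's how we'll verify it" can literally be executable acceptance criteria, written before the AI starts. A hedged sketch; in practice `validate_email` is the AI's deliverable, and the regex below is only a placeholder so the example runs:

```python
import re

# Placeholder implementation: in practice, this function is what the
# AI builds. Deliberately simple; the agreed constraints, not RFC 5322.
EMAIL = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def validate_email(address: str) -> bool:
    return bool(EMAIL.match(address))

# The acceptance criteria are the contract: written first, run
# automatically, no human review required for a pass.
ACCEPTANCE = [
    ("user@example.com", True),
    ("first.last@sub.example.co", True),
    ("no-at-sign.example.com", False),
    ("two@@example.com", False),
    ("spaces in@example.com", False),
    ("", False),
]

def verify() -> bool:
    return all(validate_email(a) == expected for a, expected in ACCEPTANCE)
```

The leader's work is the `ACCEPTANCE` table and the constraints it encodes; whether the implementation is a regex or something sturdier is the AI's call, judged only by whether `verify()` passes.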
Think of it as a promotion. You're the engineering manager now. Set direction, design verification, review where it matters. The code generation? That's your team's job.
This isn't experimental. It's happening at scale.
Spotify — Their best engineers haven't written a line of code since December. They direct AI agents from their phones via Slack. They review and merge before arriving at the office. They shipped 50+ features this way in 2025.
StrongDM — $1,000/day/engineer on AI tokens. Company rule: "Code must not be written by humans. Code must not be reviewed by humans." Their AI validates code against behavioral clones of Okta, Jira, Slack, and Google Docs.
GitHub — Claude Code accounts for 4% of all public commits. Expected to reach 20% by year-end. One in five commits on the planet.
Pick one workflow you repeat more than once a week. Encode it — the rules, the process, the acceptance criteria. Teach the system to do it. Stop doing it manually. Even if it's small. The point is to start building the muscle.
Find something that wasn't possible before AI. Not "do something faster." Something new. Something in the new dimension. Build it.
Bring both back next week. We'll share what everyone found.
My example: a personalized storybook factory for my daughter. A few words about her day become a new chapter in an ongoing story, read aloud at bedtime. Every night. Personalized. Compounding. Not "write stories faster" — something that didn't exist before. That's the new dimension.