Pragmatic Developer Experience

Code Reviews Are Slow Because Everything Else Is Broken

Code reviews aren't slow because developers are lazy. They're slow because they expose every dysfunction in how your team actually works.

Oct 14, 2025

Every engineering team I’ve worked with struggles with the same code review problems. PRs sit idle for days. Developers get blocked waiting for approvals. Someone posts “Can anyone review my PR?” in Teams. The team discusses it in retro, sets review SLAs, mandates smaller PRs, rotates review responsibilities. Six months later, you’re having the exact same conversation.

Here’s what I’ve learned: code review problems are rarely about the review process itself. They’re symptoms of deeper issues in how your team works.

What Actually Happens When a PR Lands

Let’s be honest about the reality. You open a PR and see 47 files changed, 1,200 lines added. The description says “fixes issue #347” with no context. You don’t know what issue #347 is without opening three other tabs. The code touches a part of the system you’ve never worked on. There are no tests, or the tests exist but you can’t tell what they’re validating.

Now multiply this by five other PRs in your queue, your own feature branch getting stale, and the production incident from this morning that needs a postmortem.

The problem isn’t that you’re slow at reviewing code. The problem is that reviewing this code properly would take longer than writing it from scratch, and everyone knows it. So the PR sits there radiating guilt while you and the author both pretend you’ll get to it “tomorrow”.

Why Standard Solutions Don’t Work

Common code review practices aren’t inherently bad; they’re just incomplete. They treat symptoms without addressing root causes.

Smaller PRs are genuinely helpful when your architecture supports incremental delivery. But if your monolith has tight coupling everywhere, breaking work into smaller chunks becomes artificial. Developers either create PRs that can’t be understood without the next three in the sequence, or they give up and submit large PRs anyway.

This community exchange captures the problem perfectly: breaking large PRs into stacks doesn’t eliminate complexity; it just moves it around.

Review SLAs can work when paired with quality metrics. The dysfunction happens when you measure speed without measuring depth. Teams start optimizing for “time to approval” and reviews become perfunctory rubber stamps. You’ve created an incentive for fast, shallow reviews instead of thorough ones.

Automated review assignment solves the diffusion-of-responsibility problem but creates a new one: context mismatch. You’re systematically assigning context-heavy work to people who lack context. This works with strong ownership models where automation routes PRs to the right domain owner, but fails with truly random assignment.
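One concrete way to get ownership-aware routing instead of random assignment is a CODEOWNERS file, so the automation requests review from whoever actually owns the touched paths. A minimal sketch, assuming GitHub’s CODEOWNERS convention; the paths and team names here are hypothetical:

```
# .github/CODEOWNERS (illustrative paths and teams)
# The last matching pattern wins; GitHub requests review from the matching owners.

/services/auth/        @acme/identity-team
/services/billing/     @acme/payments-team
/infra/terraform/      @acme/platform-team

# Fallback for anything without a more specific owner
*                      @acme/maintainers
```

The file format isn’t the point. The point is that the router encodes who actually has context, which is exactly what random assignment throws away.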

Requiring approval from two reviewers adds thoroughness but also coordination overhead. It can create a tragedy of the commons where everyone assumes someone else will do the deep review. This works well for critical paths like security or infrastructure changes, but becomes counterproductive when applied universally.

The pattern is clear: these practices fail when applied as process band-aids over systemic dysfunction. They succeed when they’re part of a broader system that includes clear ownership, shared context, and good architecture.

Code Review as a Diagnostic Tool

Here’s the uncomfortable truth: code review is a stress test for your engineering system. Every slow PR is a signal about deeper problems.

When PRs lack context, it reveals you don’t have a shared understanding of what you’re building or why. The product requirements were vague, the technical approach was never discussed, and now the reviewer is doing archaeology just to understand intent.

When PRs are too large to review, it reveals your architecture doesn’t support incremental delivery. You can’t ship smaller chunks because everything is coupled. The technical debt you’ve been deferring is now review debt.

When no one feels qualified to review, it reveals knowledge silos and single points of failure. You’ve got one person who understands authentication, another who owns the data pipeline, and they’re both overloaded. The review queue is just where this becomes visible.

When reviews turn into architecture debates, it reveals you don’t have clear technical direction. Every PR becomes an opportunity to renegotiate decisions that should have been made months ago, especially when the same patterns get questioned repeatedly.

When reviews focus on style over substance, it reveals that reviewers don’t have enough context to evaluate the actual logic, so they retreat to what they can evaluate: formatting, naming conventions, and other surface-level concerns that a linter should handle.

Code review isn’t slow because developers don’t care. It’s slow because it’s the moment when all the shortcuts you took earlier come due.

The Ownership Gap

Teams with clear ownership have faster code reviews. Not because they’ve optimized the review process, but because ownership solves the underlying problems.

When a developer owns a service or domain, they write code differently. They document decisions, maintain architectural consistency, and respond to reviews quickly because it’s their responsibility, not a favor. They review incoming changes thoroughly because they’ll be maintaining this code.

But ownership is tricky. It’s not about assigning people to components on a chart. It’s about giving developers the autonomy to make decisions, the responsibility to maintain their domain, and the accountability for outcomes. That’s a cultural shift, not a process change.

Netflix’s “freedom and responsibility” model works because ownership is real. Developers ship changes to production with minimal gates because they own the consequences. Code review becomes about knowledge sharing and catching genuine issues, not permission seeking.

Contrast that with environments where ownership is performative. Someone is “responsible” for a service but needs approval from architecture review boards, security teams, and product managers to make any meaningful change. The code review becomes another checkpoint in a gauntlet of approvals. Of course it’s slow.

The Context Problem

Let me tell you about the most frustrating code review I ever had to do. A PR touched a critical system component. The code looked fine—clean implementation, good tests. But I had no idea if the approach was right because I didn’t know what problem we were solving.

I found the related ticket with a brief description and no context. Nothing about why this mattered, what constraints we were working under, or what trade-offs had been considered. I pinged the author on Teams. They pointed me to a Confluence page from a meeting I wasn’t in that assumed knowledge of a discussion I’d never heard about.

I spent two hours reconstructing context just to give a meaningful review. The author had spent 30 minutes writing the code.

This is the context tax, and it’s killing your review velocity. Every reviewer who lacks context either spends hours on archaeology or does the worse thing: approves without understanding.

The solutions that work aren’t about the review process. They’re about information architecture. Link PRs to rich context, not just ticket numbers. Write PR descriptions that explain the problem, the approach, and the trade-offs. Assume the reviewer knows nothing.
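One way to make that the default is a pull request template that prompts for exactly this context. A sketch of what such a template might include; the headings are illustrative, not a standard (GitHub picks this up from .github/pull_request_template.md):

```
<!-- .github/pull_request_template.md (headings are illustrative) -->
## Problem
What is broken or missing, and why it matters now. Link the ticket,
but summarize it so the reviewer never has to open it.

## Approach
What changed, and why this approach over the alternatives considered.

## Trade-offs and risks
What you deliberately did not do, and what could go wrong.

## How to verify
Tests added, manual steps, or dashboards to watch after deploy.
```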

Discuss approaches before implementation. A 15-minute conversation before coding can eliminate hours of review back-and-forth. This is what design docs and RFCs are for, but they only work if they’re lightweight and living documents, not bureaucratic artifacts.

Record decisions where the code lives. Architecture Decision Records aren’t overhead if they prevent you from having the same debate six times across different PRs. Make tribal knowledge discoverable. If understanding this PR requires knowing about the customer escalation from Q3 2024, write that down somewhere that isn’t someone’s head.
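ADRs don’t need to be heavyweight to do that job. A sketch of a short record that’s enough to stop the same debate from resurfacing; the filename, decision, and details are hypothetical:

```
# docs/adr/0012-use-outbox-for-billing-events.md (hypothetical example)

Status: Accepted, 2024-11-02

Context
Billing events were lost when the service crashed between the database
write and the message publish (see the customer escalation from Q3 2024).

Decision
Write events to an outbox table in the same transaction as the state
change; a background worker publishes them and marks them as sent.

Consequences
+ No lost events; the outbox allows replay.
- One more table and worker to operate; events arrive with a small delay.
```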

The Interrupt-Driven Review Problem

Here’s a pattern I see often: developers treat code review as an interrupt-driven task. A notification comes in, you context-switch from your work, skim the PR for 5 minutes, leave a comment, then try to get back to what you were doing. Your flow state is destroyed, and the review is mediocre because you never built a mental model of what the code is doing.

This is the worst of both worlds. The reviewer loses productivity to context-switching. The author gets superficial feedback that misses real issues but catches nitpicks. Then in production, you discover the actual problem that no one caught because reviews were done in interrupt-driven bursts.

The alternative isn’t to batch reviews once a week—that’s too slow. It’s to treat code review as real work that deserves dedicated time and focus.

Some teams have “review hours” where the expectation is that you’re doing reviews, not writing code. No meetings scheduled, chat on pause, full attention on understanding what someone built. The reviews are higher quality, they happen faster, and developers can actually schedule around them.

Others use pair or mob programming to eliminate the review bottleneck entirely. The review happens continuously as you’re writing the code. This works brilliantly for some teams and feels painfully slow for others. The point isn’t that everyone should pair program. The point is that you need to match your review process to how your team actually works.

When Reviews Become Performance Theater

Let’s talk about what happens when code review becomes about looking busy rather than being useful.

You’ve probably seen PRs with dozens of comments about formatting, naming, and other minutiae, while no one questions whether the approach is sound. Reviews that take days but offer no substantive feedback. Approvals given without actually reading the code because everyone knows the author will clean it up anyway.

This happens when review metrics become targets. When metrics like “number of comments” or “time to approval” get tracked, people game them. You get what you measure, and you’ve measured the wrong things.

The better question is: are we catching issues before production? Are developers learning from reviews? Is code quality improving over time? Those are harder to measure, so teams fall back to vanity metrics that drive counterproductive behavior.

I’ve also seen the opposite problem: reviews that become nitpick festivals. Every PR gets 30 comments about variable names and formatting. The author feels demoralized. The reviewer feels righteous. The code barely improves. Everyone is frustrated.

This usually happens when developers don’t feel they have more meaningful ways to contribute. If you can’t influence architecture or push back on product decisions, you can at least enforce the style guide. It’s busywork disguised as quality assurance.

The fix isn’t better tools or stricter guidelines. It’s giving developers real agency over their work and clear standards for what actually matters in a review.

The Async Communication Challenge

Code review is fundamentally async communication, and it exposes a truth most teams don’t want to face: we want the benefits of async—flexibility, time to think, no meeting overhead—without paying the upfront cost of making async work well.

Author submits a PR. Reviewer comments the next day. Author responds, but now the reviewer is in meetings all afternoon. They respond the following morning. Author makes changes, but now the reviewer is working on something else and doesn’t see the update for another day. What should have been a 30-minute conversation stretches across four days with maybe two hours of actual work.

Compare this to a synchronous code review where author and reviewer jump on a call, walk through the changes in 15 minutes, make decisions in real-time, and finish. This is faster, but it doesn’t scale. You can’t interrupt people for synchronous reviews every time.

The teams that do async reviews well have figured out how to front-load context and reduce round-trips. They write detailed PR descriptions. They proactively address obvious questions. They use inline comments to explain non-obvious decisions. They treat the PR description like a mini design doc.

They’ve also figured out when to go synchronous. If a PR has more than two rounds of back-and-forth in comments, jump on a call. That’s the signal that async isn’t working for this particular review.
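If you want that signal surfaced automatically rather than noticed in hindsight, a small script can flag open PRs whose threads have grown past the threshold. A rough sketch against the GitHub REST API; the repository name is hypothetical, and total review-comment count is only a crude proxy for “rounds of back-and-forth”:

```python
# flag_long_threads.py: rough sketch, not a polished tool.
# Requires a GITHUB_TOKEN environment variable with read access to the repo.
import os
import requests

OWNER, REPO = "acme", "example-service"  # hypothetical repository
ROUND_THRESHOLD = 2                      # escalate after ~2 rounds of back-and-forth
API = "https://api.github.com"
HEADERS = {
    "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
    "Accept": "application/vnd.github+json",
}

def open_pull_requests():
    resp = requests.get(f"{API}/repos/{OWNER}/{REPO}/pulls",
                        params={"state": "open"}, headers=HEADERS)
    resp.raise_for_status()
    return resp.json()

def review_comment_count(number):
    resp = requests.get(f"{API}/repos/{OWNER}/{REPO}/pulls/{number}/comments",
                        headers=HEADERS)
    resp.raise_for_status()
    return len(resp.json())

for pr in open_pull_requests():
    count = review_comment_count(pr["number"])
    # Treat each author/reviewer exchange as roughly two comments.
    if count > ROUND_THRESHOLD * 2:
        print(f"PR #{pr['number']} ({pr['title']}): {count} review comments; "
              "consider a synchronous walkthrough")
```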

A Framework for Actually Fixing This

Here’s how to diagnose whether your code review problems are process issues or symptoms of deeper dysfunction.
