<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[Pragmatic Developer Experience: Articles]]></title><description><![CDATA[Deep dives into the tools, practices, and principles that make developers more productive and satisfied.]]></description><link>https://blog.pragmaticdx.com/s/articles</link><image><url>https://substackcdn.com/image/fetch/$s_!2adS!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3555463c-5263-4133-abae-60359ba79301_512x512.png</url><title>Pragmatic Developer Experience: Articles</title><link>https://blog.pragmaticdx.com/s/articles</link></image><generator>Substack</generator><lastBuildDate>Tue, 07 Apr 2026 20:59:40 GMT</lastBuildDate><atom:link href="https://blog.pragmaticdx.com/feed" rel="self" type="application/rss+xml"/><copyright><![CDATA[Pragmatic Developer Experience]]></copyright><language><![CDATA[en]]></language><webMaster><![CDATA[hello@pragmaticdx.com]]></webMaster><itunes:owner><itunes:email><![CDATA[hello@pragmaticdx.com]]></itunes:email><itunes:name><![CDATA[Marcel Hauri]]></itunes:name></itunes:owner><itunes:author><![CDATA[Marcel Hauri]]></itunes:author><googleplay:owner><![CDATA[hello@pragmaticdx.com]]></googleplay:owner><googleplay:email><![CDATA[hello@pragmaticdx.com]]></googleplay:email><googleplay:author><![CDATA[Marcel Hauri]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[The Christmas Gift Your Developers Actually Want: Being Heard]]></title><description><![CDATA[Developer friction is rising while leaders double down on AI and metrics. 
The only way to fix what&#8217;s actually broken is to ask the people doing the work.]]></description><link>https://blog.pragmaticdx.com/p/the-christmas-gift-your-developers</link><guid isPermaLink="false">https://blog.pragmaticdx.com/p/the-christmas-gift-your-developers</guid><dc:creator><![CDATA[Marcel Hauri]]></dc:creator><pubDate>Tue, 09 Dec 2025 11:00:53 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/60913464-3fc0-4288-ac94-4661cce453b6_1024x1024.png" length="0" type="image/png"/><content:encoded><![CDATA[<div class="preformatted-block" data-component-name="PreformattedTextBlockToDOM"><pre class="text"><em>I owe you an apology for skipping last Tuesday. Time was tight, and rather than forcing out something rushed, I chose to wait until I could sit down, think clearly, and write something that actually adds value.</em></pre></div><div><hr></div><p>It&#8217;s that time of year when we make time for the people who matter most. We schedule dinners with family we haven&#8217;t seen in months, we catch up with old friends over drinks, we actually show up to those holiday parties. There&#8217;s something about December that makes us pause the usual grind and invest in relationships.</p><p>So here&#8217;s a question: when was the last time you sat down with your developers, not for a sprint review, not for a stand-up, but to actually understand how they work?</p><p>If you&#8217;re like most engineering leaders, the answer is probably uncomfortable. <a href="https://www.atlassian.com/blog/developer/developer-experience-report-2025">Research from Atlassian</a> shows that 63% of developers now say their leaders don&#8217;t understand their pain points, up sharply from 44% just a year ago. 
That&#8217;s not marginal drift; that&#8217;s a crisis of disconnection happening in real time.</p><p>And the cost? Developers are losing an entire day each week to inefficiencies that most leaders don&#8217;t even know exist.</p><h2>Why the listening tour beats another AI tool</h2><p>The irony is rich. While leaders are betting big on AI to boost productivity, they&#8217;re missing the fundamental step of understanding where the actual friction lives. Developers aren&#8217;t asking for more tools; they&#8217;re asking to be heard.</p><p><a href="https://www.linkedin.com/in/nicolefv/">Nicole Forsgren</a> and <a href="https://www.linkedin.com/in/abinoda/">Abi Noda</a> make this clear in their new book <a href="https://amzn.to/4pXCEdS">Frictionless</a>: the best way to find developer friction is by asking developers. Not surveying them. Not tracking their metrics. Actually sitting down and listening.</p><p>Start with a listening tour. Your goal is to understand your developers&#8217; software development experience, or as <a href="https://www.linkedin.com/in/courtney-kissler/">Courtney Kissler</a>, SVP and CIO at Expeditors, says, &#8220;<em>Honor their reality.</em>&#8221;</p><p>Listen without defensiveness or explanations, and remember it&#8217;s not your role to judge their reality. Stay curious, asking for details when needed. Note helpful practices, but avoid soliciting solutions; neither you nor they have the full picture yet, and they aren&#8217;t thinking like strategic product owners.</p><p>This isn&#8217;t about being nice. This is about seeing what&#8217;s actually broken before you waste another quarter implementing solutions to problems you&#8217;ve imagined rather than verified.</p><h2>The 12-15 developer rule</h2><p>Here&#8217;s where most organizations go wrong: they either talk to too few developers (missing critical perspectives) or approach it haphazardly (getting skewed data).</p><p>Forsgren and Noda are specific about this. 
To be most effective, you&#8217;ll want to talk to 12-15 developers. And to ensure you have a representative sample, plan strategically &#8212; consider factors such as years of experience, product area, and type of development (like legacy, mobile, cloud, data science).</p><p>Think about that number for a second. Twelve to fifteen conversations. That&#8217;s maybe six hours of your time spread across a few weeks. Six hours to understand where millions of dollars in productivity are leaking out of your organization.</p><p>You need two planning lists:</p><ol><li><p>Demographics that matter. For example, work location (onsite, remote, hybrid) and experience level (junior or senior devs).</p></li><li><p>Work types and organizational areas where devs work.</p></li></ol><p>These lists help you map out your interview coverage and track who you&#8217;ve spoken with. You might quickly see that you&#8217;re relatively balanced between new hires and long-time employees, but have no interviews scheduled with junior devs on platform and legacy teams. That may be intentional, or it might be a simple oversight. Having this visualization helps you spot those gaps and correct them before you make decisions based on incomplete data.</p><p>The structure matters. You&#8217;re not trying to be exhaustive; you&#8217;re trying to be representative. You want to hear from the senior architect who&#8217;s been there eight years AND the mid-level engineer who just joined. 
You want perspectives from the team shipping customer features AND the platform team building internal tools.</p><h2>What you&#8217;ll actually learn</h2><p>When you actually do this, when you shut up and listen, you discover something uncomfortable: the friction your developers experience often has nothing to do with the problems you thought you were solving.</p><p>The research from Atlassian shows the top time-wasters for developers are finding information (services, docs, APIs), adapting new technology, and context switching between tools. Notice what&#8217;s not on that list? Coding. Developers only spend 16% of their time actually writing code, and it&#8217;s not even registering as a friction point.</p><p>So all those coding assistants you&#8217;ve invested in? They&#8217;re enhancing the experience without actually improving it, because you&#8217;ve been optimizing for the wrong 16%.</p><p>Meanwhile, the real problems&#8212;scattered documentation, unclear service boundaries, constant tool-hopping&#8212;go unaddressed. Tech debt, which used to dominate friction surveys, has actually fallen out of the top five issues. Not because it&#8217;s solved, but because other problems have gotten worse.</p><h2>The empathy gap is getting worse</h2><p>Here&#8217;s the part that should worry you: this disconnect between what developers experience and what leaders think they experience is accelerating. Leaders are banking AI-driven time savings while actual friction increases. It&#8217;s a false economy where developers are expected to deliver faster while navigating more unaddressed obstacles.</p><p>The gap exists because leaders stopped doing the most basic thing: asking. 
They&#8217;re making assumptions based on their own outdated experience of what development felt like when they were writing code, or they&#8217;re relying on proxy metrics that tell them everything except what developers actually need.</p><p>Let developers know that talking to you is a good use of their time. You can set yourself up for success by sending out short notes introducing yourself and your work and asking for a quick, 30-minute meeting. Depending on the team and organizational culture, these notes will be either well-received or met with skepticism; developers have seen a lot of initiatives come and go.</p><p>But here&#8217;s what changes the equation: when you actually follow through. When developers see that someone listened, understood, and changed something tangible as a result, the skepticism dissolves. Not because of what you promised, but because of what you delivered.</p><h2>Your December resolution</h2><p>This December, while you&#8217;re making time for everyone else who matters in your life, make time for the people building your product. Not in a Zoom all-hands. Not through an engagement survey. In actual conversations where you&#8217;re there to learn, not to sell them on your roadmap.</p><p>Twelve to fifteen conversations. That&#8217;s your winter project. Before you plan Q1&#8217;s tooling investments, before you finalize next year&#8217;s platform roadmap, before you commit to another initiative that developers will silently work around, go understand their reality.</p><p>Because the friction you can&#8217;t see is costing you more than you think. And the only way to see it is to ask.</p><h2>Make it count</h2><p>The best part? You don&#8217;t need budget approval for this. You don&#8217;t need executive buy-in. You don&#8217;t need a consultant. You just need to block time on your calendar and send some emails.</p><p>Start with your own team if you&#8217;re nervous. Pick three people from different parts of your organization. 
Ask them what slows them down. Ask them where the process breaks. Ask them what they&#8217;d fix if they could wave a magic wand.</p><p>Then actually listen. Take notes. Don&#8217;t defend. Don&#8217;t explain. Don&#8217;t problem-solve in the moment.</p><p>Just listen.</p><p>The gifts you give your developers this season don&#8217;t come wrapped in boxes. They come wrapped in attention, understanding, and the promise that someone actually cares about making their work better.</p><p>That&#8217;s the kind of gift that keeps giving long after January.</p><div><hr></div><p><em>Want to dive deeper into how to structure these conversations? Nicole Forsgren and Abi Noda&#8217;s &#8220;<a href="https://amzn.to/4pXCEdS">Frictionless</a>&#8221; offers a comprehensive framework for discovering and addressing developer friction at every level of your organization. Also, don&#8217;t forget to <a href="https://developerexperiencebook.com/">download a free workbook</a> with practical exercises and templates to guide your listening tour.</em></p><div><hr></div><h3>Taking a Winter Break</h3><p>This is my last post for 2025. I&#8217;ll be taking a break over the holidays and will return with fresh content in January.</p><p>But I don&#8217;t want the conversation to stop here. <strong>What topics would you like me to tackle in 2026?</strong> Whether it&#8217;s a specific developer experience challenge you&#8217;re facing, an organizational pattern you&#8217;ve observed, or something you&#8217;d like a pragmatic take on &#8212; <em>I want to hear from you</em>.</p><p><strong><a href="https://forms.gle/1BLmAxgdozwUSd158">Submit your topic idea here</a></strong> and I might write about it in an upcoming post.</p><h2>A Thank You to My Readers</h2><p>To everyone who&#8217;s read, subscribed, and engaged with these posts over the past few months, thank you. 
Your support means a lot to me.</p><p>As a year-end thank you, I&#8217;m offering <strong>70% off my monthly and annual subscriptions until December 31st</strong>. If you&#8217;ve been considering supporting this work, now&#8217;s the time: <strong><a href="https://blog.pragmaticdx.com/9e3e9ba6">Get 70% off your subscription</a></strong></p><p>See you in 2026. Have a great holiday season, and may your on-call rotations be quiet.</p><p>Cheers, Marcel</p>]]></content:encoded></item><item><title><![CDATA[The AI Feedback Loop That Isn't Working Yet]]></title><description><![CDATA[Why developers are slower with AI tools despite believing they're faster and what actually works]]></description><link>https://blog.pragmaticdx.com/p/the-ai-feedback-loop-that-isnt-working</link><guid isPermaLink="false">https://blog.pragmaticdx.com/p/the-ai-feedback-loop-that-isnt-working</guid><dc:creator><![CDATA[Marcel Hauri]]></dc:creator><pubDate>Tue, 25 Nov 2025 20:38:38 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/f263acc9-d473-485b-acae-7878f41c0c69_1024x1024.png" length="0" type="image/png"/><content:encoded><![CDATA[<p>Developers have a time problem. Not the &#8220;<em>I need more hours in the day</em>&#8221; kind&#8212;though that&#8217;s true too&#8212;but a feedback problem that costs them hours or even days of productive work.</p><p>When you submit code changes, you wait. The CI/CD pipeline runs its tests. Sometimes it fails in the final stage after hours of processing. You get a cryptic error log. You debug. You resubmit. You wait again.</p><p>This is the expensive reality of modern software development. 
A <a href="https://dl.acm.org/doi/10.1145/3691620.3695606">2024 research paper from Chalmers University</a> identified this pattern, noting that &#8220;<em>developers often seek expedited results from these pipelines</em>&#8221;, but the architecture of most CI/CD systems works against this preference.</p><p>Now we have data showing the problem is more complex than anyone predicted.</p><h2>The Real Cost: Time Saved, Time Lost</h2><p><a href="https://www.atlassian.com/blog/developer/developer-experience-report-2025">Atlassian&#8217;s 2025 State of Developer Experience survey</a> found that AI is saving developers approximately 10 hours per week. That sounds like unqualified success, until you see the other half of the equation.</p><p>The same survey found that 50% of developers report losing 10+ hours per week to organizational inefficiencies: finding information, adapting new technology, and context switching between tools. Developers are saving 10 hours a week with AI and losing 10 hours a week to organizational friction.</p><p>We&#8217;re right back where we started, except now there&#8217;s an illusion of progress.</p><p>Most organizations aren&#8217;t using AI to address friction points; they&#8217;re using it to speed up the parts that weren&#8217;t actually bottlenecks. Developers only spend about 16% of their time coding, and coding isn&#8217;t their primary friction point. Yet that&#8217;s where most AI investment goes.</p><h2>The Trust Problem</h2><p>Developer sentiment tells another part of this story. <a href="https://survey.stackoverflow.co/2025/ai">Positive sentiment for AI tools has decreased in 2025 to just 60%</a>, down from over 70% in both 2023 and 2024. More developers now actively distrust the accuracy of AI tools (46%) than trust it (33%).</p><p>Experienced developers are the most cautious, with only 2.6% reporting they &#8220;<em>highly trust</em>&#8221; AI output and 20% reporting they &#8220;<em>highly distrust</em>&#8221; it. 
This widespread understanding that AI outputs require human verification explains why experienced developers often slow down&#8212;they&#8217;re doing additional verification work.</p><p>As <a href="https://simonwillison.net/2025/Jul/21/coding-with-llms/">Salvatore Sanfilippo observed</a>, while LLMs can write parts of a codebase successfully under strict supervision, &#8220;<em>when left alone with nontrivial goals they tend to produce fragile code bases that are larger than needed, complex, full of local minima choices, suboptimal in many ways</em>&#8221;.</p><h2>What Actually Works: CI/CD Integration</h2><p>Despite the challenges, some applications show genuine promise. The vision of LLMs embedded in CI/CD pipelines has moved from theory to practice.</p><p><a href="https://medium.com/@API4AI/ai-powered-code-reviews-2025-key-llm-trends-shaping-software-development-eac78e51ee59">Tools now embed LLM-powered code reviews into CI/CD workflows</a>, ensuring code quality checks happen automatically with every commit. One finance company <a href="https://markaicode.com/ai-driven-devops-automate-cicd-pipelines-llms/">reduced build failures by 47%</a> after implementing LLM-based self-healing pipelines, with engineers saving 7.5 hours weekly.</p><p><a href="https://craft.faire.com/automated-code-reviews-with-llms-cf2cc51bb6d3">Faire&#8217;s implementation of automated code reviews</a> demonstrates how this works in practice. They use LLMs to automate generic review requirements, the checks that don&#8217;t require deep project context but do consume reviewer time. This frees human reviewers to focus on architectural decisions and whether code actually meets product requirements.</p><p>The difference? 
Integration into existing workflows rather than standalone tools, focus on organizational friction points rather than individual productivity, and automation of repetitive checks rather than replacement of human judgment.</p><h2>Log Analysis: Where AI Actually Excels</h2><p>One area where AI demonstrates clear value is log analysis, exactly what the Chalmers research identified as a key opportunity.</p><p><a href="https://www.mdpi.com/2624-800X/5/3/55">Recent studies show LLMs achieve an F1-score of 0.928</a> for vulnerability detection in log analysis, significantly outperforming traditional models like XGBoost (0.555) and LightGBM (0.432).</p><p>IBM&#8217;s production deployment provides real-world validation. <a href="https://research.ibm.com/publications/performance-optimizations-for-scaling-llm-based-log-analytics-tool">By December 2024, their LLM-based log analysis tool had processed 1,376 cases</a>, handling 877 GB of data and 1.04 billion log lines. Among respondents, 53.79% found the tool beneficial, and 60.4% of products reported saving at least 30 minutes per trigger.</p><p>Why does log analysis work when other applications struggle? Three factors: defined scope with clear inputs and outputs, natural language advantage since logs are semi-structured text, and straightforward verification paths.</p><h2>The Learning Curve Nobody Discussed</h2><p>The <a href="https://metr.org/blog/2025-07-10-early-2025-ai-experienced-os-dev-study/">METR study</a> found that three-quarters of participants saw reduced performance when using AI tools. However, one of the top performers with AI had the most previous Cursor experience. The paper acknowledges: &#8220;<em>it&#8217;s plausible that there is a high skill ceiling for using Cursor, such that developers with significant experience see positive speedup</em>&#8221;.</p><p>Amazon&#8217;s experience with their Q coding assistant tells a similar story. 
After significant improvements in April 2025, about half of developers found it genuinely helpful, but <a href="https://newsletter.pragmaticengineer.com/p/software-engineering-with-llms-in-2025">it still has limitations</a>, including understanding only one file at a time. Interestingly, models fine-tuned on Amazon&#8217;s own massive codebase &#8220;<em>feel only moderately better than non-trained models</em>&#8221;.</p><p>Effective AI-assisted development requires significant practice with specific tools, understanding of tool limitations, workflow integration rather than skill replacement, and context management most developers haven&#8217;t mastered.</p><h2>The Empathy Gap</h2><p>Perhaps most concerning: <a href="https://www.atlassian.com/blog/developer/developer-experience-report-2025">63% of developers now say leaders don&#8217;t understand their pain points</a>, up sharply from 44% in 2024.</p><p>This widening empathy gap explains why AI deployment often misses the mark. Leaders see developers using AI and assume productivity is improving. Developers experience the slowdown, the verification work, the context-switching overhead&#8212;but their perception doesn&#8217;t match reality.</p><p>The <a href="https://blog.jetbrains.com/research/2025/10/state-of-developer-ecosystem-2025/">JetBrains 2025 Developer Ecosystem survey</a> found that 66% of developers don&#8217;t believe current metrics reflect their true contributions. While tech decision-makers dream of reducing technical debt, developers want transparency, constructive feedback, and clarity of goals.</p><p>Internal collaboration, communication, and clarity are now just as important as faster CI pipelines or better IDEs. Yet organizations continue to invest primarily in the latter.</p><h2>What This Means for Engineering Leaders</h2><p><strong>Measure what matters</strong>: If developers take longer with AI but believe they&#8217;re faster, your productivity metrics aren&#8217;t capturing reality. 
Time-to-completion matters, but so do code quality, maintainability, and developer confidence.</p><p><strong>Focus on friction, not features</strong>: Developers lose time to finding information, adapting new technology, and context switching, none of which AI coding assistants address. The time saved writing code gets consumed by organizational inefficiency.</p><p><strong>Integration over innovation</strong>: The most successful AI deployments integrate into existing workflows. Faire&#8217;s automated code reviews work because they happen within the pull request process developers already use.</p><p><strong>The learning curve is real</strong>: Don&#8217;t expect immediate productivity gains. Developers need significant experience with specific AI tools before seeing benefits. Budget for training time.</p><p><strong>Trust the skeptics</strong>: Experienced developers are the most cautious about AI tools&#8212;and they&#8217;re often right to be. Their skepticism reflects understanding of where AI helps and where it introduces problems.</p><h2>The Path That Actually Works</h2>
      <p>
          <a href="https://blog.pragmaticdx.com/p/the-ai-feedback-loop-that-isnt-working">
              Read more
          </a>
      </p>
   ]]></content:encoded></item><item><title><![CDATA[How to Actually Protect Focus Time]]></title><description><![CDATA[The three-layer system, exact scripts, and implementation playbook for protecting your focus when everyone's convinced their request is the exception.]]></description><link>https://blog.pragmaticdx.com/p/how-to-actually-protect-focus-time</link><guid isPermaLink="false">https://blog.pragmaticdx.com/p/how-to-actually-protect-focus-time</guid><dc:creator><![CDATA[Marcel Hauri]]></dc:creator><pubDate>Tue, 18 Nov 2025 11:02:46 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/e4d515ee-04ad-4a2c-9261-7f8797d9ab15_500x500.png" length="0" type="image/png"/><content:encoded><![CDATA[<p>You&#8217;ve read about why <a href="https://blog.pragmaticdx.com/p/the-problem-of-interruptions">interruptions wreck productivity</a>. You understand the research. The problem is that knowing doesn&#8217;t help when your product manager is standing at your desk asking for &#8220;<em>just a quick estimate</em>&#8221; while you&#8217;re debugging a race condition.</p><p>This isn&#8217;t another article explaining the interruption problem; this is the playbook for what to do when you&#8217;re surrounded by people who genuinely believe their requests justify breaking your focus. What follows are the specific tactics, scripts, and systems that work when good intentions collide with the need for uninterrupted thinking time.</p><p>The central challenge isn&#8217;t technical. Everyone interrupting you thinks they&#8217;re the exception. Your stakeholder&#8217;s deadline is real. Your manager&#8217;s question seems time-sensitive. Your colleague&#8217;s blocker feels urgent. They&#8217;re not being unreasonable; they&#8217;re operating in an environment that&#8217;s trained them to treat immediacy as importance.</p><h2>The Three-Layer Defense System</h2><p>Protecting focus time requires defense in depth. 
One-off tactics fail because your environment constantly pressure-tests your boundaries. You need three layers: structural defenses that prevent most interruptions, communication protocols that route the rest appropriately, and tactical responses for exceptions.</p><p>Most focus time advice stops at the tactical layer - giving you scripts without the infrastructure to make them work. That&#8217;s why you feel like you&#8217;re constantly negotiating instead of doing your job. The real leverage is in the first two layers.</p><h2>Layer One: The Calendar Architecture That Actively Defends</h2><p>Your calendar should be an active defense system, not a passive record of how your time gets stolen.</p><p><strong>The Focus Block Template</strong></p><p>Create recurring blocks with these specific properties:</p><ul><li><p>Title: &#8220;<em>Deep Work: [Specific Task]</em>&#8221; not &#8220;<em>Focus Time</em>&#8221; or &#8220;<em>Busy</em>&#8221; - specificity creates accountability</p></li><li><p>Visibility: Public so people see what you&#8217;re working on, but &#8220;<em>Show as Busy</em>&#8221; to discourage invites</p></li><li><p>Auto-decline: If supported (Google Workspace and Outlook both do), enable automatic meeting decline</p></li><li><p>Duration: Minimum 90 minutes - shorter blocks are optimistic theater given it takes 30-60 minutes to enter flow state</p></li></ul><p><strong>The Two-Block Daily Minimum</strong></p><p>Schedule blocks during your biological peak hours. For most developers: one morning block (9-11 AM) and one afternoon block (2-4 PM). <a href="https://arxiv.org/pdf/1805.05504">University of Calgary research</a> found that developers who face frequent interruptions show signs of mental fatigue much earlier, leading to more afternoon errors. 
Your afternoon focus block isn&#8217;t just about productivity - it&#8217;s about code quality.</p><p><strong>The Office Hours Buffer</strong></p><p>Here&#8217;s the piece most advice misses: schedule 30-minute &#8220;<em>office hours</em>&#8221; blocks immediately after each focus block. This gives you designated time to handle queued questions and gives others a specific answer (&#8220;<em>I&#8217;m checking messages at 11:30 and 4:30, can it wait until then?</em>&#8221;).</p><p>Post these everywhere: Slack status, email signature, team wiki, team calendar as public events. Make it impossible for anyone to claim you didn&#8217;t communicate them.</p><p><strong>The Meeting Firewall</strong></p><p>Configure your calendar tool aggressively:</p><ul><li><p>Set working hours that match actual availability for meetings (10 AM-12 PM, 1 PM-4 PM)</p></li><li><p>Enable 15-minute buffers before and after meetings</p></li><li><p>Mark focus blocks as &#8220;<em>Private</em>&#8221; or &#8220;<em>Out of Office</em>&#8221; depending on auto-decline capabilities</p></li></ul><p>Result: people scheduling meetings see limited availability, naturally pushing routine conversations async or to office hours.</p><h2>Layer Two: The Communication Routing That Encodes Urgency</h2><p>Development teams have created informal hierarchies of communication tools: chat for immediate needs, ticketing for structured work, email for non-urgent items. 
Make this explicit and non-negotiable.</p><p><strong>The Channel Protocol</strong></p><p>Define and publish to your entire team:</p><ul><li><p><strong>Production alerts/PagerDuty</strong>: Immediate response required</p></li><li><p><strong>Direct phone call</strong>: True emergency blocking work right now</p></li><li><p><strong>Slack @mention with &#8220;URGENT&#8221;</strong>: Response within 2 hours (usable once per week maximum)</p></li><li><p><strong>Slack @mention (no URGENT)</strong>: Response by end of business day, triaged during office hours</p></li><li><p><strong>Slack without @mention</strong>: Response when convenient, typically next business day</p></li><li><p><strong>Team ticket/PM tool</strong>: Triaged during sprint planning or standups</p></li><li><p><strong>Email</strong>: 24-48 hour response time</p></li></ul><p><strong>The Team Agreement</strong></p><p>This only works with explicit commitment. Call a team meeting and get agreement on:</p><ol><li><p>The channel hierarchy</p></li><li><p>What constitutes &#8220;<em>urgent</em>&#8221; (production down, security issue, blocker for today&#8217;s ship date - that&#8217;s it)</p></li><li><p>Consequences for misusing urgent channels</p></li></ol><p>Document in your team handbook. Reference every time someone violates the protocol.</p><p><strong>The Async-First Default</strong></p><p>Before any synchronous communication, ask: &#8220;<em>Could this be a doc comment, PR comment, or ticket update?</em>&#8221; Create templates for common patterns:</p><ul><li><p>Estimate requests &#8594; Standard form with context, constraints, timeline</p></li><li><p>Design feedback &#8594; Shared doc with comment threads</p></li><li><p>Status updates &#8594; Automated from PM tool</p></li></ul><p>Make async easier than sync for routine requests.</p><h2>Layer Three: Tactical Scripts for When People Bypass Your System</h2><p>Even with strong defenses, you&#8217;ll face situations where someone goes around the system. 
You need responses that protect focus while preserving relationships.</p><p>These scripts work because they do three things: acknowledge the request, propose an alternative, create an emergency escalation path.</p><p><strong>For Slack Interruptions During Focus</strong></p><p>&#8220;<em>I&#8217;m in a focus block for the next 90 minutes on [specific task]. If production-critical, call my phone. Otherwise, picking this up at [specific time].</em>&#8221;</p><p>Variation for managers: &#8220;<em>Want to give this proper attention rather than rushed response. Deep in [task] until 2:30 - can I get back by 3:00 with complete answer?</em>&#8221;</p><p><strong>For In-Person &#8220;</strong><em><strong>Got a Minute?</strong></em><strong>&#8221; Interruptions</strong></p><p>&#8220;<em>I&#8217;m mid-task and will lose context if I stop. Drop it in Slack with details? Checking there at [time].</em>&#8221;</p><p>If they press: &#8220;<em>What&#8217;s the blocker if this waits 90 minutes? If it&#8217;s blocking deploy or production, let&#8217;s tackle now. Otherwise, I&#8217;ll lose an hour on [task] if I context-switch.</em>&#8221;</p><p><strong>For Meeting Requests During Protected Time</strong></p><p>&#8220;<em>Have this blocked for deep work on [task]. Can we solve async first via [doc/Slack thread]? If we need sync after, I&#8217;m open from [alternatives].</em>&#8221;</p><p>If they insist: &#8220;<em>To be transparent, moving this delays [deliverable] by [timeframe]. If [their topic] is more urgent than [your deliverable], I can shift - want to make that trade-off consciously.</em>&#8221;</p><p><strong>For &#8220;</strong><em><strong>Quick Estimate</strong></em><strong>&#8221; Requests</strong></p><p>&#8220;<em>Quick estimates tend to be wildly inaccurate. Can give you a range now with low confidence, or proper estimate in [timeframe] after thinking through dependencies. 
Which helps your planning more?</em>&#8221;</p><p><strong>For Colleagues Claiming Blocked Status</strong></p><p>&#8220;<em>Walk me through what you&#8217;ve tried and the specific error. Share in Slack with screenshots and I&#8217;ll troubleshoot at [time].</em>&#8221;</p><p>If they&#8217;ve tried nothing: &#8220;<em>Start with [documentation/runbook]. Most issues are covered there. Still stuck after checking? Ping me with what you tried and where it failed.</em>&#8221;</p><p><strong>For Leadership &#8220;</strong><em><strong>Quick Feedback</strong></em><strong>&#8221; Requests</strong></p><p>&#8220;<em>Can give gut reaction now or considered analysis after this focus block at [time]. Difference is probably 60% confidence versus 90%. Which do you need?</em>&#8221;</p><p><strong>For Product Prioritizing Their Feature</strong></p><p>&#8220;<em>Currently on [task] shipping [date]. If [their feature] is higher priority, I can switch, but [current task] slips to [new date]. Want to confirm that trade-off with [stakeholder] before switching?</em>&#8221;</p><h2>The Four-Step Implementation Playbook</h2>
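<p>One way to make the channel protocol above stick is to encode it as data that a triage script or Slack bot could read, so the hierarchy lives somewhere machines can enforce it. A minimal sketch; the channel names, hour values, and helper function are my own illustrative mapping of the hierarchy, not a real integration:</p>

```python
# Encoding the channel hierarchy as data a triage bot or handbook generator
# could consume. Channel names and response windows mirror the protocol in
# the article; the structure and helper are illustrative, not a real tool.

PROTOCOL = {
    # channel: (max response time in hours, weekly usage budget or None)
    "pagerduty":            (0, None),   # immediate response required
    "phone_call":           (0, None),   # true emergency only
    "slack_urgent_mention": (2, 1),      # "URGENT", once per week maximum
    "slack_mention":        (8, None),   # by end of business day
    "slack_plain":          (24, None),  # next business day
    "email":                (48, None),  # 24-48 hour window
    "team_ticket":          (72, None),  # triaged at sprint planning/standup
}

def escalation_needed(channel: str, hours_waiting: float) -> bool:
    """True once a request has waited past its channel's response window."""
    max_hours, _budget = PROTOCOL[channel]
    return hours_waiting > max_hours

# e.g. escalation_needed("slack_mention", 10) -> True (8h window exceeded)
```

Publishing the same table in the team handbook and in the bot keeps the documented protocol and the enforced protocol from drifting apart.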
      <p>
          <a href="https://blog.pragmaticdx.com/p/how-to-actually-protect-focus-time">
              Read more
          </a>
      </p>
]]></content:encoded></item><item><title><![CDATA[The Hidden Cost of Adding Just One More Feature]]></title><description><![CDATA[Every team thinks one more feature will make the launch better. In reality, it&#8217;s the fastest way to delay, over-engineer, and burn out before you ever ship.]]></description><link>https://blog.pragmaticdx.com/p/the-hidden-cost-of-adding-just-one</link><guid isPermaLink="false">https://blog.pragmaticdx.com/p/the-hidden-cost-of-adding-just-one</guid><dc:creator><![CDATA[Marcel Hauri]]></dc:creator><pubDate>Tue, 11 Nov 2025 11:03:19 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/b28cd177-86a6-48d2-bba2-66915158d694_1024x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>It&#8217;s mid-November. Your team is staring down a launch deadline that&#8217;s already slipped twice. Everyone&#8217;s exhausted. And then someone says it: &#8220;<em>You know what would really make this pop? If we just added...</em>&#8221;</p><p>Stop right there.</p><p>I get it. The temptation is real. You&#8217;re <em>this close</em> to shipping, and suddenly everyone has brilliant ideas about what would make the product truly shine. Your CEO wants one more integration. Your lead engineer sees an opportunity to refactor &#8220;<em>while we&#8217;re in there anyway</em>&#8221;. Your product manager discovered a competitor feature you &#8220;<em>absolutely must have</em>&#8221;.</p><p>This is how launches die. 
Not with a bang, but with a whimper of &#8220;<em>just one more thing</em>&#8221;.</p><h2>The Math Nobody Wants to Do</h2><p>Let&#8217;s talk numbers, because feelings lie but data doesn&#8217;t.</p><p>Sonar&#8217;s <a href="https://www.sonarsource.com/blog/new-research-from-sonar-on-cost-of-technical-debt/">research</a> examining over 200 projects found that technical debt costs $306&#8217;000 annually for every million lines of code, which equals 5&#8217;500 developer hours spent on remediation instead of innovation. </p><p>Every &#8220;<em>quick feature</em>&#8221; you jam in before launch isn&#8217;t free. It&#8217;s a loan you&#8217;re taking out against your team&#8217;s future productivity, and the interest compounds faster than you think.</p><p><a href="https://medium.com/@wikifactory/the-effects-of-product-launch-delay-for-startups-and-how-to-prevent-it-38e6535bb3d4">Only 55% of product launches happen on schedule</a>, and delayed products miss their internal targets 20% of the time. Even worse? <a href="https://decode.agency/article/scope-creep-software-development/">A six-month launch delay costs companies 33% of expected revenue</a>.</p><p>That &#8220;<em>must-have</em>&#8221; feature isn&#8217;t just delaying your launch. It&#8217;s actively destroying value.</p><h2>Scope Creep Is the Silent Killer of Product Launches</h2><p>Here&#8217;s what actually happens when you add &#8220;<em>just one more feature</em>&#8221;:</p><p>You think you&#8217;re adding a week of work. But feature creep in complex technical projects can trigger <a href="https://www.dartai.com/blog/how-scope-creep-affect-project-success">18-month delays and 40% budget overruns</a>. 
The <a href="https://www.wrike.com/project-management-guide/faq/what-is-scope-creep-in-project-management/">infamous Denver International Airport</a> baggage system suffered over 2&#8217;000 design changes that turned the project into a cautionary tale.</p><p>This isn&#8217;t about perfectionism versus pragmatism. It&#8217;s about understanding that nearly 50% of projects now experience <a href="https://kissflow.com/project/avoid-scope-creep-in-project/">scope creep</a>, and that percentage is climbing, not falling.</p><p>Every addition triggers a chain reaction. Your &#8220;<em>simple</em>&#8221; feature needs integration testing with existing systems. It needs documentation updates. It needs QA cycles for edge cases. It needs deployment procedures. It needs support team training.</p><p>What looked like a three-day task becomes a three-week nightmare. Your launch date evaporates. Team morale craters. And your competitors ship while you&#8217;re still arguing about implementation details.</p><h2>Why &#8220;Just Ship It&#8221; Also Backfires</h2><p>Now, some of you are thinking: &#8220;<em>Fine, I&#8217;ll just ship whatever we have and fix it later</em>&#8221;.</p><p>Also wrong.</p><p>The &#8220;<em>ship it and iterate</em>&#8221; gospel has created its own problems. Yes, <a href="https://www.linkedin.com/pulse/arent-any-typos-essay-we-launched-too-late-reid-hoffman/">Reid Hoffman</a> famously said you should be embarrassed by your first version. But that advice gets weaponized into shipping genuinely broken products that damage your reputation before you&#8217;ve built one.</p><p>There&#8217;s a spectrum here. On one end, you&#8217;ve got perfectionist paralysis that prevents you from ever launching. On the other, you&#8217;ve got reckless shipping that burns through user goodwill faster than you can acquire users.</p><p>The sweet spot? 
Ship something focused that actually works.</p><p>Organizations with high technical debt spend <a href="https://fullscale.io/blog/technical-debt-quantification-financial-analysis/">40% more on maintenance and deliver features 25-50% slower</a> than competitors. That &#8220;<em>move fast</em>&#8221; culture becomes &#8220;<em>move slow and break things</em>&#8221; real quick.</p><h2>The Holiday Crunch Amplifies Everything</h2><p>Right now, as you read this, you&#8217;re probably in the worst possible position to make good decisions about scope.</p><p>End-of-year targets. Holiday parties. People taking time off. The pressure to &#8220;<em>finish strong</em>&#8221; and prove the year wasn&#8217;t wasted. Everyone&#8217;s tired, burned out, and desperate to show progress.</p><p>This is exactly when teams make catastrophic calls. You add features to justify the timeline slip. You skip proper testing because &#8220;<em>we&#8217;ll catch it in January</em>&#8221;. You compromise on architecture because &#8220;<em>we just need to get this out</em>&#8221;</p><p>The features you&#8217;re considering adding right now, in November, under deadline pressure, with half your team already mentally checked out for the holidays, are the technical debt you&#8217;ll be paying off all of next year.</p><h2>What Actually Works</h2><p>Here&#8217;s the contrarian take: the best thing you can do before launch is cut scope, not add to it.</p>
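<p>The debt and delay figures cited above can be folded into a rough back-of-the-envelope model for what &#8220;<em>just one more feature</em>&#8221; actually costs. A minimal sketch; the linear scaling and the sample inputs are my own simplifying assumptions, not part of the cited research:</p>

```python
# Back-of-the-envelope cost model using the figures cited in the article:
# Sonar: ~$306,000/year and ~5,500 remediation hours per million lines of
# code; a six-month launch delay costs ~33% of expected revenue.
# Linear scaling in both models is a simplifying assumption of this sketch.

COST_PER_MLOC = 306_000   # USD per million lines of code, per year
HOURS_PER_MLOC = 5_500    # developer hours lost to remediation, per year

def tech_debt_cost(lines_of_code: int) -> tuple[float, float]:
    """Annual debt cost (USD) and remediation hours for a codebase."""
    mloc = lines_of_code / 1_000_000
    return mloc * COST_PER_MLOC, mloc * HOURS_PER_MLOC

def delay_cost(expected_revenue: float, delay_months: float) -> float:
    """Lost revenue, extrapolating the 33%-per-six-months figure linearly."""
    return expected_revenue * 0.33 * (delay_months / 6)

# A hypothetical 2.5 MLOC product expecting $4M revenue, slipping 3 months:
# tech_debt_cost(2_500_000) -> ($765,000/yr, 13,750 dev-hours)
# delay_cost(4_000_000, 3)  -> ~$660,000 in lost revenue
```

Even with generous error bars, the model makes the trade-off concrete: the feature that slips the launch a quarter rarely earns back what the slip costs.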
      <p>
          <a href="https://blog.pragmaticdx.com/p/the-hidden-cost-of-adding-just-one">
              Read more
          </a>
      </p>
   ]]></content:encoded></item><item><title><![CDATA[Make the Easy Path the Right Path]]></title><description><![CDATA[Success isn&#8217;t about pushing developers harder, it&#8217;s about shaping the system around them. When doing the right thing feels effortless, excellence becomes inevitable.]]></description><link>https://blog.pragmaticdx.com/p/make-the-easy-path-the-right-path</link><guid isPermaLink="false">https://blog.pragmaticdx.com/p/make-the-easy-path-the-right-path</guid><dc:creator><![CDATA[Marcel Hauri]]></dc:creator><pubDate>Tue, 04 Nov 2025 11:01:10 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/460870f9-3b72-46b2-8d9b-214b3714dbea_1024x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Developers under time pressure will take shortcuts, skipping tests, hardcoding configs, bypassing CI, if those paths offer less resistance. But the world&#8217;s most successful tech companies have cracked the code: they&#8217;ve engineered their platforms so that doing the right thing is also the fastest, smoothest, and most obvious way forward. This isn&#8217;t just good developer experience, it&#8217;s a competitive advantage with tangible financial impact. Large technology companies report multimillion-dollar savings from reduced cycle times, lower maintenance overhead, and improved developer retention.</p><p>The concept is simple yet profound: when best practices become the path of least resistance, teams naturally &#8220;<em>fall into</em>&#8221; success. Spotify doubled deployment frequency. Netflix reduced test cycles from 62 minutes to under 5 minutes. Atlassian achieved 100% security scanning coverage while reducing critical vulnerabilities by 39%. 
These examples show what&#8217;s possible when teams align their systems with developer needs, evidence that when best practices are frictionless, measurable improvement tends to follow.</p><p>This article covers proven strategies from Google, Netflix, Stripe, Spotify, and other leading companies, examining the specific tools, tactics, and design principles that reduce friction while elevating quality. The findings reveal a consistent pattern: <strong>organizations that invest heavily in developer productivity don&#8217;t just move faster, they build better software with happier teams.</strong></p><h2>Designing for the Pit of Success</h2><p>Rico Mariani, a performance architect at Microsoft, coined the term &#8220;<em><a href="https://ricomariani.medium.com/pit-of-success-for-organizations-a046a0eae7b2">Pit of Success</a></em>&#8221; around 2003 to describe a radical design philosophy: <strong>build platforms where developers simply fall into winning practices</strong>. As he wrote, &#8220;<em>We want our customers to simply fall into winning practices by using our platform and frameworks. To the extent that we make it easy to get into trouble we fail</em>&#8221;.</p><p>This inverts the traditional approach to software quality. Instead of requiring heroic effort to write good code, the system makes it nearly impossible to write bad code. The pattern extends far beyond language design&#8212;Netflix&#8217;s <a href="https://netflixtechblog.com/the-show-must-go-on-securing-netflix-studios-at-scale-19b801c86479">Wall-E platform</a> makes security and compliance part of the default service setup, automating practices that once required manual checks.</p><p>Alan, a French digital health-insurance platform, embedded &#8220;<em><a href="https://medium.com/alan/falling-into-the-pit-of-success-726dda873fae">Falling into the pit of success</a></em>&#8221; as a core engineering principle, even creating a Slack emoji to keep the concept culturally present. 
The philosophy spreads beyond just tooling to influence hiring, architecture decisions, and process design. </p><div class="pullquote"><p><strong>The goal is simple <br></strong>Developers must actively work to do the wrong thing, because the default path is inherently correct.</p></div><p>The contrast illuminates the power of this approach. In a &#8220;<em>Pit of Despair</em>&#8221;, developers constantly risk falling into traps: manual configuration errors, security vulnerabilities from using default settings, or performance issues from following standard patterns. Teams spend enormous energy climbing out of holes. But in a Pit of Success, <strong>gravity works in your favor</strong>. </p><p>New developers make fewer mistakes. Senior developers move faster. Quality improves not because people try harder, but because the system guides them.</p><h2>Golden Paths and Paved Roads to Effortless Best Practices</h2><p>Spotify and Netflix independently arrived at the same insight, though they use different names for it. Spotify calls it the &#8220;<em><a href="https://engineering.atspotify.com/2020/08/how-we-use-golden-paths-to-solve-fragmentation-in-our-software-ecosystem">Golden Path</a></em>&#8221;, an opinionated, well-documented, supported way to build software. Netflix calls it the &#8220;<em><a href="https://netflixtechblog.com/the-show-must-go-on-securing-netflix-studios-at-scale-19b801c86479">Paved Road</a></em>&#8221;, a formalized set of commitments between platform teams and engineering teams. Both describe the same fundamental concept: <strong>create workflows so polished and efficient that deviation becomes unappealing</strong>.</p><p>The implementation details matter enormously. Back in 2017, Netflix built a complete Platform-as-a-Service with standardized components: RPC, service discovery, monitoring, logging, all pre-assembled and version-compatible. 
Their tool bootstrapped projects with relevant hooks, CI integration, and desktop setup.</p><p>Spotify&#8217;s approach centered on <a href="https://backstage.io/">Backstage</a>, an open-source developer portal that houses their Golden Paths. The platform provides software templates for creating new services with best practices pre-configured, a software catalog tracking all components with metadata, and integrated documentation following a docs-as-code approach.</p><p><a href="https://www.redhat.com/en/topics/platform-engineering/golden-paths">Red Hat</a>&#8217;s definition captures the essential characteristics of effective Golden Paths: they must be optional (not mandatory), transparent (developers understand what happens under the hood), extensible (adaptable when needed), and minimal in cognitive load. <strong>The critical success factor is treating Golden Paths as products, not mandates</strong>. When platform teams use <a href="https://cloud.google.com/blog/products/application-development/golden-paths-for-engineering-execution-consistency">product thinking</a>&#8212;user research, roadmaps, feedback loops, internal marketing&#8212;adoption mostly follows naturally.</p><p>The approach works when workflows are repetitive and high-frequency, best practices are well-established, and the majority of teams share similar needs. It fails when implemented as &#8220;<em>Golden Cages</em>&#8221;: rigid, one-size-fits-all solutions with no escape hatches.</p><h2>Real-world impact: measurable returns from reduced friction</h2><p>The business case for making the easy path the right path rests on hard numbers, not aspirations. A <a href="https://tei.forrester.com/go/Cortex/IDP/?lang=en-us">Forrester Total Economic Impact study</a> commissioned by Cortex reported a composite organization achieving 224% ROI over three years, with a six-month payback period. 
While vendor-commissioned, the study&#8217;s methodology was based on interviews with four organizations and provides insight into the scale of potential returns.</p><p><a href="https://www.simform.com/blog/etsy-devops-case-study/">Etsy&#8217;s</a> transformation from weekly deployments (often with downtime) to continuous deployment tells a compelling story about cultural and technical change. In 2012, they executed <strong>6,419 production deployments&#8212;averaging 25 per workday and 535 per month</strong>&#8212;with 196 different people deploying to production. They ran 14,000 tests daily. The journey took two years and required comprehensive monitoring with Graphite and StatsD, feature flags for safer deployments, automated CI/CD pipelines, and a blameless postmortem culture. The key insight: small, frequent changes reduce risk more than large, infrequent ones.</p><p><a href="https://gradle.com/blog/netflix-pursues-soft-devex-goals-with-hard-devprod-metrics-using-test-distribution/">Netflix</a> achieved a 92% reduction in test cycle time through Test Distribution and Developer Productivity Engineering (DPE) techniques. Tests that took 62 minutes now complete in under 5 minutes by parallelizing execution across scalable cloud pools and automating feedback cycles. The analysis revealed that <strong>90% of build time was spent in tests</strong>, making this the highest-impact optimization area. The ROI proved immediate because commoditized cloud compute costs far less than expensive engineering time.</p><p><a href="https://engineering.salesforce.com/how-ai-test-automation-cut-developer-productivity-bottlenecks-by-30-at-scale/">Salesforce</a> tackled a different problem at massive scale with their TF Triage Agent, an AI-powered system for automated test failure analysis. 
Processing <strong>6 million tests daily with 78 billion combinations, handling 150,000 test failures per month and 27,000 changelists per day</strong>, the system provides developers with concrete recommendations within seconds about which code changes likely caused failures. The phased rollout from a 20-person team to 500+ engineers delivered a <strong>30% reduction in test failure resolution bottlenecks</strong>. The key to success: concrete recommendations beat speculative AI outputs, and integration with Salesforce-specific data improved accuracy.</p><p>Toyota Motor North America&#8217;s <a href="https://backstage.spotify.com/discover/blog/adopter-spotlight-toyota/">Backstage</a> implementation generated <strong>$10 million</strong> in savings in 2022 through standardization at enterprise scale. A San Francisco-based financial services company with over $1.71 trillion in assets used <a href="https://www.accelq.com/casestudy/microsoft-financial-services/">ACCELQ</a> for Microsoft Dynamics 365 test automation, reducing regression script generation time by 75% (from 16 hours to 4 hours) while achieving 80%+ test reuse through modularity.</p><h2>Rapid feedback loops turn testing into the faster option</h2><p>The mathematics of developer productivity center on feedback loops. </p><p><a href="https://claroty.com/blog/engineering-speed-up-your-ci-cd-pipeline">Claroty Engineering&#8217;s</a> journey from 20+ minute pipelines to under 10 minutes offers a blueprint for optimization. 
They profiled bottlenecks using cProfile and SQLTap, ran faster static tests (linters, type checks) before expensive integration tests, implemented fail-fast with pytest -x to stop at first failure, used RAMFS to run databases in memory (doubling test speed), split jobs across isolated AWS environments using GitLab&#8217;s parallel feature, and optimized database operations&#8212;truncating tables instead of dropping/recreating achieved an <strong>82% speed improvement</strong>.</p><p><a href="https://circleci.com/blog/ci-cd-at-scale-circleci-vs-github-actions/">CircleCI&#8217;s</a> performance benchmarks against GitHub Actions reveal the impact of infrastructure choices. CircleCI executes pipelines <strong>40.29% faster at median</strong> compared to GitHub Actions default runners, and 2.09% faster even against GitHub&#8217;s larger runners despite using less RAM. More dramatically, CircleCI shows <strong>90.13% less queuing than GitHub Actions default and 99.12% less than larger runners</strong>, with consistent sub-30-second queue times under load versus 22+ minute waits on GitHub Actions.</p><p>The pattern across successful implementations is consistent: caching dependencies saves 5-10 minutes per build, parallel execution splits test suites to run simultaneously, Claroty&#8217;s smart test selection runs only affected tests, and cloud-based platforms like BrowserStack and LambdaTest provide 3500+ real browsers and devices for parallel cross-browser testing. <a href="https://medium.com/@patrickkoss/how-to-make-faster-ci-cd-pipelines-adfdb7f2f9f9">Patrick Koss</a> documented reducing CI time from 15-20 minutes to 3-4 minutes through caching (Golangci-Lint, dependencies, Docker layers), parallel execution, and optimized resource allocation.</p><h2>Shift-left done right so security never slows teams down</h2><p>Security traditionally operated as a gate at the end of development, creating adversarial relationships between security teams and developers. 
Modern approaches shift security left into the development process itself, but success requires making security scanning <strong>faster and more helpful than skipping it</strong>.</p><p><a href="https://snyk.io/case-studies/atlassian/">Snyk&#8217;s</a> integration with Atlassian demonstrates the power of this approach. Atlassian reached <strong>100% container scanning coverage across the organization while reducing high-severity vulnerabilities by 65% and critical-severity vulnerabilities by 39%</strong>, all within a few months. The implementation automated scanning during deployment events and created tickets with metadata on severity and priority scores. Atlassian chose Snyk specifically because it proved &#8220;<em>easier for developers to integrate into their pipelines</em>&#8221;.</p><p>The shift-left philosophy succeeds when it follows five principles: integrate at code time (scan before commit/push), use native environments (results in IDE or PR, not separate portals), provide actionable feedback (tell developers what to fix and how), automate policy (policy-as-code sets guardrails without manual gates), and deliver fast feedback (sub-second to seconds, not minutes).</p><p>Real implementations follow a common pattern. IDE integration catches issues as code is written through plugins. Pre-commit hooks run quick checks before code leaves the developer&#8217;s machine. PR automation runs full scans on pull requests with findings in comments. Policy gates block merges on critical/high issues only, not all findings. Scheduled deep scans run full DAST and penetration tests regularly. Production monitoring provides runtime protection through RASP and IAST.</p><p>The cultural transformation proves as important as the technical implementation. 
Breaking down silos between security and development teams, making security enable rather than block innovation, building &#8220;<em>security fluency</em>&#8221; for developers without making them experts, adopting shared responsibility models, and maintaining observability for continuous improvement, <strong>these organizational changes determine whether tools actually reduce friction or simply create new bottlenecks</strong>.</p><h2>Platform engineering means treating internal tools as products</h2><p>The most profound shift in modern developer experience comes from treating internal platforms not as cost centers or IT infrastructure, but as <strong>products with developers as customers</strong>. This mental model transformation changes everything about how platforms are built, measured, and evolved. </p>
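<p>The fail-fast ordering described in the feedback-loop section, cheap static checks before expensive integration tests, stopping at the first failure, can be sketched as a tiny pipeline runner. The stage names and commands here are illustrative assumptions, not Claroty&#8217;s or anyone else&#8217;s real configuration:</p>

```python
# Illustrative fail-fast pipeline: stages run cheapest-first so most
# failures surface in seconds, and a failing stage stops everything after
# it. Stage names and commands are hypothetical, not a real CI config.
import subprocess
import sys

# Ordered by cost: lint and type checks take seconds, unit tests minutes,
# integration tests (databases, network) the most.
STAGES = [
    ("lint", ["ruff", "check", "."]),
    ("types", ["mypy", "src/"]),
    ("unit", ["pytest", "-x", "tests/unit"]),         # -x: stop at first failure
    ("integration", ["pytest", "-x", "tests/integration"]),
]

def run_pipeline() -> int:
    for name, cmd in STAGES:
        print(f"--- {name}: {' '.join(cmd)}")
        result = subprocess.run(cmd)
        if result.returncode != 0:
            print(f"{name} failed; skipping remaining stages")
            return result.returncode  # fail fast: later stages never run
    return 0

if __name__ == "__main__":
    sys.exit(run_pipeline())
```

The ordering is the whole trick: a typo caught by the linter costs one second of CI instead of a full integration run, which is exactly what makes running the checks feel faster than skipping them.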
      <p>
          <a href="https://blog.pragmaticdx.com/p/make-the-easy-path-the-right-path">
              Read more
          </a>
      </p>
   ]]></content:encoded></item><item><title><![CDATA[Three Forces That Quietly Shape How Developers Work]]></title><description><![CDATA[Max Kanat-Alexander breaks down why cycle time, focus, and cognitive load determine whether your engineering team thrives or burns out]]></description><link>https://blog.pragmaticdx.com/p/three-forces-that-quietly-shape-how</link><guid isPermaLink="false">https://blog.pragmaticdx.com/p/three-forces-that-quietly-shape-how</guid><dc:creator><![CDATA[Marcel Hauri]]></dc:creator><pubDate>Tue, 28 Oct 2025 11:03:12 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/72cc11cf-63e1-4610-9d2b-6ffabb899b18_1024x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Recently I watched <a href="https://maxkanatalexander.com/">Max Kanat-Alexander</a>&#8217;s talk on developer experience, and it resonated deeply with me. </p>
      <p>
          <a href="https://blog.pragmaticdx.com/p/three-forces-that-quietly-shape-how">
              Read more
          </a>
      </p>
]]></content:encoded></item><item><title><![CDATA[How Developer Experience Measurement Delivers Real Impact]]></title><description><![CDATA[Engineering teams waste time tracking vanity metrics instead of the factors research shows actually predict productivity and retention.]]></description><link>https://blog.pragmaticdx.com/p/how-developer-experience-measurement</link><guid isPermaLink="false">https://blog.pragmaticdx.com/p/how-developer-experience-measurement</guid><dc:creator><![CDATA[Marcel Hauri]]></dc:creator><pubDate>Tue, 21 Oct 2025 10:01:55 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/3bcf443c-ab13-4edd-969f-32be81383f09_1024x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>The biggest challenge isn&#8217;t finding metrics. It&#8217;s that we&#8217;re measuring the wrong things. I&#8217;ve reviewed dozens of DevEx measurement programs, and the pattern is consistent: teams obsess over easy-to-measure proxies while ignoring factors that actually drive business outcomes.</p><p>Take lines of code. Easy to measure, feels important, every platform provides data. So teams track it religiously. The problem? LOC tells you nothing about value delivered. You can optimize LOC by writing verbose, repetitive code, but you haven&#8217;t improved developer experience. You&#8217;ve just gamed a metric.</p><p>Developer experience is multidimensional. As researchers Nicole Forsgren and colleagues discovered while developing the <a href="https://queue.acm.org/detail.cfm?id=3454124">SPACE framework</a>, productivity encompasses satisfaction and well-being, performance outcomes, activity levels, communication patterns, and flow states. Tr&#8230;</p>
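<p>The SPACE dimensions just listed translate naturally into a per-dimension scorecard rather than a single vanity number. A minimal sketch; the 0-100 scale and the &#8220;<em>investigate the weakest dimension first</em>&#8221; rule are my own framing, not part of the framework:</p>

```python
# Illustrative SPACE-style scorecard: one score per dimension, never
# collapsed into a single average. Dimension names follow the framework's
# five areas; the scale and helper logic are my own framing.
from dataclasses import dataclass

DIMENSIONS = ("satisfaction", "performance", "activity", "communication", "flow")

@dataclass(frozen=True)
class SpaceScorecard:
    scores: dict  # dimension name -> score on a 0-100 scale

    def __post_init__(self):
        missing = set(DIMENSIONS) - set(self.scores)
        if missing:
            raise ValueError(f"missing dimensions: {sorted(missing)}")

    def weakest(self) -> str:
        """The dimension to investigate first; deliberately no averaging,
        since a great activity score can't compensate for broken flow."""
        return min(DIMENSIONS, key=lambda d: self.scores[d])
```

Refusing to average is the point: a composite number invites the same gaming as lines of code, while a weakest-dimension view points at a concrete problem to fix.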
      <p>
          <a href="https://blog.pragmaticdx.com/p/how-developer-experience-measurement">
              Read more
          </a>
      </p>
   ]]></content:encoded></item><item><title><![CDATA[Code Reviews Are Slow Because Everything Else Is Broken]]></title><description><![CDATA[Code reviews aren't slow because developers are lazy. They're slow because they expose every dysfunction in how your team actually works.]]></description><link>https://blog.pragmaticdx.com/p/code-reviews-are-slow-because-everything</link><guid isPermaLink="false">https://blog.pragmaticdx.com/p/code-reviews-are-slow-because-everything</guid><dc:creator><![CDATA[Marcel Hauri]]></dc:creator><pubDate>Tue, 14 Oct 2025 10:01:23 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/455d97e8-cbf1-4262-8180-1cfe28e29fc8_1024x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Every engineering team I&#8217;ve worked with struggles with the same code review problems. PRs sit idle for days. Developers get blocked waiting for approvals. Someone posts &#8220;<em>Can anyone review my PR?</em>&#8221; in Teams. The team discusses it in retro, sets review SLAs, mandates smaller PRs, rotates review responsibilities. Six months later, you&#8217;re having the exact same conversation.</p><p>Here&#8217;s what I&#8217;ve learned: code review problems are rarely about the review process itself. They&#8217;re symptoms of deeper issues in how your team works.</p><h2><strong>What Actually Happens When a PR Lands</strong></h2><p>Let&#8217;s be honest about the reality. You open a PR and see 47 files changed, 1,200 lines added. The description says &#8220;<em>fixes issue #347</em>&#8221; with no context. You don&#8217;t know what issue #347 is without opening three other tabs. The code touches a part of the system you&#8217;ve never worked on. 
There are no tests, or the tests exist but you can&#8217;t tell what they&#8217;re validating.</p><p>Now multiply this by five other PRs in your queue, your own feature branch getting stale, and the production incident from this morning that needs a postmortem.</p><p>The problem isn&#8217;t that you&#8217;re slow at reviewing code. The problem is that reviewing this code properly would take longer than writing it from scratch, and everyone knows it. So the PR sits there radiating guilt while you both pretend you&#8217;ll get to it &#8220;<em>tomorrow</em>&#8221;.</p><h2><strong>Why Standard Solutions Don&#8217;t Work</strong></h2><p>The common code review practices aren&#8217;t inherently bad, they&#8217;re just incomplete. They treat symptoms without addressing root causes.</p><p>Smaller PRs are genuinely helpful when your architecture supports incremental delivery. But if your monolith has tight coupling everywhere, breaking work into smaller chunks becomes artificial. Developers either create PRs that can&#8217;t be understood without the next three in the sequence, or they give up and submit large PRs anyway.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!76-J!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd4b37760-8a33-4f24-8c0f-f009f0c34036_1192x1028.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!76-J!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd4b37760-8a33-4f24-8c0f-f009f0c34036_1192x1028.png 424w, 
https://substackcdn.com/image/fetch/$s_!76-J!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd4b37760-8a33-4f24-8c0f-f009f0c34036_1192x1028.png 848w, https://substackcdn.com/image/fetch/$s_!76-J!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd4b37760-8a33-4f24-8c0f-f009f0c34036_1192x1028.png 1272w, https://substackcdn.com/image/fetch/$s_!76-J!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd4b37760-8a33-4f24-8c0f-f009f0c34036_1192x1028.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!76-J!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd4b37760-8a33-4f24-8c0f-f009f0c34036_1192x1028.png" width="728" height="627.8389261744967" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/d4b37760-8a33-4f24-8c0f-f009f0c34036_1192x1028.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:false,&quot;imageSize&quot;:&quot;normal&quot;,&quot;height&quot;:1028,&quot;width&quot;:1192,&quot;resizeWidth&quot;:728,&quot;bytes&quot;:229482,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://blog.pragmaticdx.com/i/175706795?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd4b37760-8a33-4f24-8c0f-f009f0c34036_1192x1028.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:&quot;center&quot;,&quot;offset&quot;:false}" class="sizing-normal" alt="" 
srcset="https://substackcdn.com/image/fetch/$s_!76-J!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd4b37760-8a33-4f24-8c0f-f009f0c34036_1192x1028.png 424w, https://substackcdn.com/image/fetch/$s_!76-J!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd4b37760-8a33-4f24-8c0f-f009f0c34036_1192x1028.png 848w, https://substackcdn.com/image/fetch/$s_!76-J!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd4b37760-8a33-4f24-8c0f-f009f0c34036_1192x1028.png 1272w, https://substackcdn.com/image/fetch/$s_!76-J!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd4b37760-8a33-4f24-8c0f-f009f0c34036_1192x1028.png 1456w" sizes="100vw" loading="lazy"></picture></div></a><figcaption class="image-caption">This <a href="https://mastodon.social/@liskin@genserver.social/115343921483715081">community exchange</a> captures the problem perfectly: breaking large PRs into stacks doesn&#8217;t eliminate complexity, it just moves it around.</figcaption></figure></div><p>Review SLAs can work when paired with quality metrics. The dysfunction happens when you measure speed without measuring depth. Teams start optimizing for &#8220;time to approval&#8221; and reviews become perfunctory rubber stamps. You&#8217;ve created an incentive for fast, shallow reviews instead of thorough ones.</p><p>Automated review assignment solves the diffusion of responsibility problem but creates a new one: context mismatch. You&#8217;re systematically assigning context-heavy work to people who lack context. This works with strong ownership models where automation routes PRs to the right domain owner, but fails with truly random assignment.</p><p>Requiring approval from two reviewers adds thoroughness but also coordination overhead. It can create a tragedy of the commons where everyone assumes someone else will do the deep review. This works well for critical paths like security or infrastructure changes, but becomes counterproductive when applied universally.</p><p>The pattern is clear: these practices fail when applied as process band-aids over systemic dysfunction.
They succeed when they&#8217;re part of a broader system that includes clear ownership, shared context, and good architecture.</p><h2><strong>Code Review as a Diagnostic Tool</strong></h2><p>Here&#8217;s the uncomfortable truth: code review is a stress test for your engineering system. Every slow PR is a signal about deeper problems.</p><p>When PRs lack context, it reveals you don&#8217;t have a shared understanding of what you&#8217;re building or why. The product requirements were vague, the technical approach was never discussed, and now the reviewer is doing archaeology just to understand intent.</p><p>When PRs are too large to review, it reveals your architecture doesn&#8217;t support incremental delivery. You can&#8217;t ship smaller chunks because everything is coupled. The technical debt you&#8217;ve been deferring is now review debt.</p><p>When no one feels qualified to review, it reveals knowledge silos and single points of failure. You&#8217;ve got one person who understands authentication, another who owns the data pipeline, and they&#8217;re both overloaded. The review queue is just where this becomes visible.</p><p>When reviews turn into architecture debates, it reveals you don&#8217;t have clear technical direction. Every PR becomes an opportunity to renegotiate decisions that should have been made months ago, especially when the same patterns get questioned repeatedly.</p><p>When reviews focus on style over substance, it reveals that reviewers don&#8217;t have enough context to evaluate the actual logic, so they retreat to what they can evaluate: formatting, naming conventions, and other surface-level concerns that a linter should handle.</p><div class="pullquote"><p>Code review isn&#8217;t slow because developers don&#8217;t care. It&#8217;s slow because it&#8217;s the moment where all the shortcuts you took earlier come due.</p></div><h2><strong>The Ownership Gap</strong></h2><p>Teams with clear ownership have faster code reviews. 
Not because they&#8217;ve optimized the review process, but because ownership solves the underlying problems.</p><p>When a developer owns a service or domain, they write code differently. They document decisions, maintain architectural consistency, and respond to reviews quickly because it&#8217;s their responsibility, not a favor. They review incoming changes thoroughly because they&#8217;ll be maintaining this code.</p><p>But ownership is tricky. It&#8217;s not about assigning people to components on a chart. It&#8217;s about giving developers the autonomy to make decisions, the responsibility to maintain their domain, and the accountability for outcomes. That&#8217;s a cultural shift, not a process change.</p><p>Netflix&#8217;s &#8220;<em><a href="https://jobs.netflix.com/culture">freedom and responsibility</a></em>&#8221; works because ownership is real. Developers ship changes to production with minimal gates because they own the consequences. Code review becomes about knowledge sharing and catching genuine issues, not permission seeking.</p><p>Contrast that with environments where ownership is performative. Someone is &#8220;responsible&#8221; for a service but needs approval from architecture review boards, security teams, and product managers to make any meaningful change. The code review becomes another checkpoint in a gauntlet of approvals. Of course it&#8217;s slow.</p><h2><strong>The Context Problem</strong></h2><p>Let me tell you about the most frustrating code review I ever had to do. A PR touched a critical system component. The code looked fine&#8212;clean implementation, good tests. But I had no idea if the approach was right because I didn&#8217;t know what problem we were solving.</p><p>I found the related ticket with a brief description and no context. Nothing about why this mattered, what constraints we were working under, or what trade-offs had been considered. I pinged the author on Teams. 
They pointed me to a Confluence page from a meeting I wasn&#8217;t in that assumed knowledge of a discussion I&#8217;d never heard about.</p><p>I spent two hours reconstructing context just to give a meaningful review. The author had spent 30 minutes writing the code.</p><p>This is the context tax, and it&#8217;s killing your review velocity. Every reviewer who lacks context either spends hours on archaeology or does the worse thing: approves without understanding.</p><p>The solutions that work aren&#8217;t about the review process. They&#8217;re about information architecture. Link PRs to rich context, not just ticket numbers. Write PR descriptions that explain the problem, the approach, and the trade-offs. Assume the reviewer knows nothing.</p><p>Discuss approaches before implementation. A 15-minute conversation before coding can eliminate hours of review back-and-forth. This is what design docs and RFCs are for, but they only work if they&#8217;re lightweight and living documents, not bureaucratic artifacts.</p><p>Record decisions where the code lives. Architecture Decision Records aren&#8217;t overhead if they prevent you from having the same debate six times across different PRs. Make tribal knowledge discoverable. If understanding this PR requires knowing about the customer escalation from Q3 2024, write that down somewhere that isn&#8217;t someone&#8217;s head.</p><h2><strong>The Interrupt-Driven Review Problem</strong></h2><p>Here&#8217;s a pattern I see often: developers treat code review as an interrupt-driven task. A notification comes in, you context-switch from your work, skim the PR for 5 minutes, leave a comment, then try to get back to what you were doing. Your flow state is destroyed, and the review is mediocre because you never built a mental model of what the code is doing.</p><p>This is the worst of both worlds. The reviewer loses productivity to context-switching. 
The author gets superficial feedback that misses real issues but catches nitpicks. Then in production, you discover the actual problem that no one caught because reviews were done in interrupt-driven bursts.</p><p>The alternative isn&#8217;t to batch reviews once a week&#8212;that&#8217;s too slow. It&#8217;s to treat code review as real work that deserves dedicated time and focus.</p><p>Some teams have &#8220;<em>review hours</em>&#8221; where the expectation is that you&#8217;re doing reviews, not writing code. No meetings scheduled, chat on pause, full attention on understanding what someone built. The reviews are higher quality, they happen faster, and developers can actually schedule around them.</p><p>Others use pair or mob programming to eliminate the review bottleneck entirely. The review happens continuously as you&#8217;re writing the code. This works brilliantly for some teams and feels painfully slow for others. The point isn&#8217;t that everyone should pair program. The point is that you need to match your review process to how your team actually works.</p><h2><strong>When Reviews Become Performance Theater</strong></h2><p>Let&#8217;s talk about what happens when code review becomes about looking busy rather than being useful.</p><p>You&#8217;ve probably seen PRs with dozens of comments about formatting, naming, and other minutiae, while no one questions whether the approach is sound. Reviews that take days but offer no substantive feedback. Approvals given without actually reading the code because everyone knows the author will clean it up anyway.</p><p>This happens when review metrics become targets. When &#8220;<em>number of comments</em>&#8221; or &#8220;<em>time to approval</em>&#8221; get tracked, people game them. You get what you measure, and you&#8217;ve measured the wrong things.</p><p>The better question is: are we catching issues before production? Are developers learning from reviews? Is code quality improving over time? 
Those are harder to measure, so teams fall back to vanity metrics that drive counterproductive behavior.</p><p>I&#8217;ve also seen the opposite problem: reviews that become nitpick festivals. Every PR gets 30 comments about variable names and formatting. The author feels demoralized. The reviewer feels righteous. The code barely improves. Everyone is frustrated.</p><p>This usually happens when developers don&#8217;t feel they have more meaningful ways to contribute. If you can&#8217;t influence architecture or push back on product decisions, you can at least enforce the style guide. It&#8217;s busywork disguised as quality assurance.</p><p>The fix isn&#8217;t better tools or stricter guidelines. It&#8217;s giving developers real agency over their work and clear standards for what actually matters in a review.</p><h2><strong>The Async Communication Challenge</strong></h2><p>Code review is fundamentally async communication, and it exposes a truth most teams don&#8217;t want to face: we want the benefits of async&#8212;flexibility, time to think, no meeting overhead&#8212;without paying the upfront cost of making async work well.</p><p>Author submits a PR. Reviewer comments the next day. Author responds, but now the reviewer is in meetings all afternoon. They respond the following morning. Author makes changes, but now the reviewer is working on something else and doesn&#8217;t see the update for another day. What should have been a 30-minute conversation stretched over four days with maybe two hours of actual work.</p><p>Compare this to a synchronous code review where author and reviewer jump on a call, walk through the changes in 15 minutes, make decisions in real-time, and finish. This is faster, but it doesn&#8217;t scale. You can&#8217;t interrupt people for synchronous reviews every time.</p><p>The teams that do async reviews well have figured out how to front-load context and reduce round-trips. They write detailed PR descriptions. 
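</p><p>A sketch of what &#8220;<em>detailed</em>&#8221; can mean in practice. The section names below are illustrative, not a standard; the point is that each one answers a question the reviewer would otherwise have to reconstruct:</p><pre class="text">## Problem
What hurts today, and why now. Link the ticket AND summarize it;
assume the reviewer knows nothing.

## Approach
What you did and why, including the alternatives you rejected.

## Trade-offs
What you consciously punted on, and where the risk lives.

## How to verify
Commands to run, screenshots, or the tests that cover the change.</pre><p>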
They proactively address obvious questions. They use inline comments to explain non-obvious decisions. They treat the PR description like a mini design doc.</p><p>They&#8217;ve also figured out when to go synchronous. If a PR has more than two rounds of back-and-forth in comments, jump on a call. That&#8217;s the signal that async isn&#8217;t working for this particular review.</p><h2><strong>A Framework for Actually Fixing This</strong></h2><p>Here&#8217;s how to diagnose whether your code review problems are process issues or symptoms of deeper dysfunction.</p>
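<p>One way to make ownership concrete is to make it machine-readable, so review requests route to the people who own the domain instead of whoever is least busy. A sketch of a GitHub CODEOWNERS file (the paths and team handles are illustrative):</p><pre class="text"># .github/CODEOWNERS
# Later rules win, so the fallback goes first. PRs touching a path
# automatically request review from the team that maintains it.
*                   @acme/eng-leads
/services/auth/     @acme/identity-team
/services/billing/  @acme/payments-team
/infra/             @acme/platform-team</pre><p>If most PRs fall through to the fallback rule, that is itself a diagnostic: you have code nobody owns.</p>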
      <p>
          <a href="https://blog.pragmaticdx.com/p/code-reviews-are-slow-because-everything">
              Read more
          </a>
      </p>
]]></content:encoded></item><item><title><![CDATA[Why “We’ll Fix It Later” Becomes “We’ll Never Fix It”]]></title><description><![CDATA[Technical debt isn't a coding problem, it's a culture problem. An exploration of why 'fix it later' becomes 'never,' and what teams who escape this trap do differently.]]></description><link>https://blog.pragmaticdx.com/p/why-well-fix-it-later-becomes-well</link><guid isPermaLink="false">https://blog.pragmaticdx.com/p/why-well-fix-it-later-becomes-well</guid><dc:creator><![CDATA[Marcel Hauri]]></dc:creator><pubDate>Tue, 07 Oct 2025 10:02:51 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/610af0d0-4c6d-40e4-a46c-b88d54c24c5d_1536x1024.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Think about your last credit card statement. You see the balance, the interest rate, the minimum payment. Everything is visible and quantified. Technical debt doesn&#8217;t work like that.</p><p>That hardcoded configuration value sitting in your codebase? It&#8217;s not sending monthly reminders about how much time it costs every deployment. The integration you copied and pasted into three different services? It&#8217;s not calculating compound interest on the bugs you&#8217;ll need to fix in triplicate when something changes.</p><p>Here&#8217;s a sobering statistic: research shows that on average, <strong><a href="https://www.sciencedirect.com/science/article/abs/pii/S0164121221002119">25% of development effort gets spent dealing with technical debt issues</a></strong>. That&#8217;s one out of every four hours your team works. Not building features. Not fixing customer-reported bugs. Just paying interest on past decisions.</p><p>When you take on technical debt intentionally (shipping a feature quickly to test market demand, for instance), you&#8217;re making a calculated trade. The problem is most technical debt isn&#8217;t intentional. It emerges from the gap between what you knew when you wrote the code and what you know now.</p><h2>The Four Types of Technical Debt</h2><p>Martin Fowler created a useful framework for thinking about this, called the <a href="https://martinfowler.com/bliki/TechnicalDebtQuadrant.html">Technical Debt Quadrant</a>.
It categorizes debt along two dimensions: whether it&#8217;s deliberate or inadvertent, and whether it&#8217;s reckless or prudent.<br></p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!wVkD!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8b6a5406-0c68-4f8c-a790-75102f6b3f33_512x384.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!wVkD!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8b6a5406-0c68-4f8c-a790-75102f6b3f33_512x384.png 424w, https://substackcdn.com/image/fetch/$s_!wVkD!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8b6a5406-0c68-4f8c-a790-75102f6b3f33_512x384.png 848w, https://substackcdn.com/image/fetch/$s_!wVkD!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8b6a5406-0c68-4f8c-a790-75102f6b3f33_512x384.png 1272w, https://substackcdn.com/image/fetch/$s_!wVkD!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8b6a5406-0c68-4f8c-a790-75102f6b3f33_512x384.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!wVkD!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8b6a5406-0c68-4f8c-a790-75102f6b3f33_512x384.png" width="512" height="384" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/8b6a5406-0c68-4f8c-a790-75102f6b3f33_512x384.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:384,&quot;width&quot;:512,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:46955,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://blog.pragmaticdx.com/i/175199925?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8b6a5406-0c68-4f8c-a790-75102f6b3f33_512x384.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!wVkD!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8b6a5406-0c68-4f8c-a790-75102f6b3f33_512x384.png 424w, https://substackcdn.com/image/fetch/$s_!wVkD!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8b6a5406-0c68-4f8c-a790-75102f6b3f33_512x384.png 848w, https://substackcdn.com/image/fetch/$s_!wVkD!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8b6a5406-0c68-4f8c-a790-75102f6b3f33_512x384.png 1272w, https://substackcdn.com/image/fetch/$s_!wVkD!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8b6a5406-0c68-4f8c-a790-75102f6b3f33_512x384.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a><figcaption class="image-caption">Technical Debt Quadrant - by Martin Fowler</figcaption></figure></div><p><strong>Reckless and deliberate</strong> debt happens when teams knowingly cut corners without considering consequences. &#8220;<em>We don&#8217;t have time for design</em>&#8221; shipped under deadline pressure with no plan to fix it.</p><p><strong>Reckless and inadvertent</strong> debt comes from lack of knowledge. Junior teams building systems without understanding design patterns, creating problems they don&#8217;t even recognize as problems.</p><p><strong>Prudent and inadvertent</strong> debt is what you learn by doing. You made the best decision you could with the information available, and only now (after building it) do you understand what you should have done differently.</p><p><strong>Prudent and deliberate</strong> debt is the strategic kind.
You know you&#8217;re taking shortcuts, you&#8217;ve weighed the tradeoffs, and you have a plan to address it. &#8220;<em>We need to ship this MVP to validate the market, then we&#8217;ll refactor once we have funding</em>&#8221;.</p><p>The issue is that most organizations treat all debt the same way, when these different types require completely different management approaches.</p><p>Consider this scenario: A junior engineer inherits a codebase with a cryptic comment: &#8220;<em>DO NOT MODIFY - talks to legacy billing system</em>&#8221;. Nobody remembers why. The person who wrote it left two years ago. The billing system was replaced eighteen months ago. But the warning remains, and with it, all the architectural decisions built around a constraint that vanished.</p><p>The debt compounds silently. Unlike financial debt, nobody&#8217;s tracking the balance until something breaks.</p><h2>How Organizations Are Designed to Create Debt</h2><p>Let me show you how this plays out. An engineer proposes refactoring a critical but fragile system. The work will take three weeks and deliver zero new features. Meanwhile, the roadmap has six features stakeholders are waiting for.</p><p>What happens next? The engineer who ships those six features gets recognized in the all-hands meeting. The engineer who prevented future incidents through careful refactoring gets nothing. You can&#8217;t celebrate disasters that didn&#8217;t happen.</p><p>Now extend this pattern across quarterly reviews, promotion cycles, and annual planning. What behavior gets rewarded? Shipping visible things. What behavior gets ignored or penalized? Investing time in invisible improvements.</p><p><a href="https://www.sciencedirect.com/science/article/abs/pii/S0164121221002119">Research confirms this intuition</a>. Studies of technical debt across organizations consistently identify deadline pressure as the single most cited cause. The top three effects? 
Delivery delays, low maintainability, and constant rework. The very outcomes teams are trying to avoid by taking shortcuts are the outcomes they guarantee by taking them.</p><p>This raises an interesting point about job mobility. In a typical two-to-three-year tenure at a company, you can easily ship code in year one and leave before the consequences arrive in year three. You&#8217;re incentivized to optimize for the demo, the launch, the promotion case. Not for the developer who&#8217;ll curse your name in 2027.</p><p>That&#8217;s not malice. It&#8217;s a rational response to incentives.</p><h2>Why Teams Can&#8217;t Talk About Technical Debt</h2><p>Pay attention to how your team talks about technical debt. The language reveals deeper cultural problems.</p><p>In many organizations, acknowledging debt feels like confessing failure. &#8220;<em>We need to fix this mess</em>&#8221; carries an implicit &#8220;<em>we screwed up</em>&#8221;. This framing makes honest conversation impossible. The most damaging teams treat technical debt as a moral failing. Code is either &#8220;<em>good</em>&#8221; or &#8220;<em>bad</em>&#8221;. Developers who write imperfect code are careless or incompetent.</p><p>This creates a culture where everyone pretends their code is perfect, debt accumulates in silence, and nobody asks for time to fix anything because asking feels like admitting you&#8217;re not good enough.</p><p>Here&#8217;s what healthier teams recognize: <strong>all code is written under constraints. Time, knowledge, requirements, available tools. The code you wrote last year reflects what you knew last year. It&#8217;s not bad code. It&#8217;s old code.</strong></p><p>That difference is profound. One framing breeds shame and hiding. The other breeds learning and improvement.</p><p>But even well-intentioned teams fall into linguistic traps. Calling it &#8220;<em>technical debt</em>&#8221; at all implies it&#8217;s the engineering team&#8217;s problem.
In reality, it&#8217;s an organizational problem. Product managers who insist on impossible timelines create debt. Executives who refuse to invest in infrastructure create debt. Sales teams who promise custom features create debt. Yet the word &#8220;<em>technical</em>&#8221; assigns ownership to the people with the least power to prevent it.</p><h2>The Hidden Hierarchy of Software Work</h2><p>Let me describe a pattern you&#8217;ve probably seen. In most organizations, there&#8217;s a clear status hierarchy:</p><ul><li><p>Building new products: prestigious</p></li><li><p>Adding features: valuable</p></li><li><p>Maintaining existing systems: thankless</p></li><li><p>Fixing old code: what you do when you&#8217;re not trusted with important work</p></li></ul><p>This hierarchy pervades everything. Job postings celebrate &#8220;<em>building from scratch</em>&#8221;. Interview questions focus on greenfield design. Promotion packets highlight new systems launched. Meanwhile, the engineer who spent six months making the payment system reliable enough that nobody thinks about it? They get passed over because their work is invisible.</p><h3>Technical Debt Lives Everywhere, Not Just in Code</h3><p>Here&#8217;s another dimension to this: technical debt doesn&#8217;t just live in code. Research has identified at least ten distinct types of debt that accumulate across a system; here are seven of the most common:</p><p><strong>Design debt</strong> refers to architectural shortcuts and structural compromises. That service you left inside the monolith &#8220;<em>just for now</em>&#8221; because breaking it apart was too complex.</p><p><strong>Code debt</strong> is the classic type&#8212;duplicated logic, tight coupling, complex functions that nobody wants to touch. The stuff that makes developers groan during code review.</p><p><strong>Test debt</strong> accumulates when you skip writing tests or let your test suite become outdated and brittle.
Every deploy becomes a gamble.</p><p><strong>Documentation debt</strong> happens when your docs lag behind reality. The onboarding guide that references systems you deprecated six months ago. The API documentation that describes endpoints that don&#8217;t exist.</p><p><strong>Infrastructure debt</strong> builds up when your deployment pipeline, monitoring, or server architecture can&#8217;t scale with your business. You&#8217;re still manually deploying because automating it keeps getting postponed.</p><p><strong>Requirements debt</strong> emerges from partially implemented features or requirements that work for some cases but not others. The edge cases you said you&#8217;d handle &#8220;<em>later</em>.&#8221;</p><p><strong>Architecture debt</strong> (distinct from design debt) involves fundamental technology choices that are now outdated or unsuitable. You&#8217;re stuck on an old framework version because upgrading would break everything.</p><p>The problem is that teams often only recognize and track code debt, treating it as the only &#8220;<em>real</em>&#8221; technical debt. Meanwhile, documentation debt makes onboarding take weeks instead of days. Test debt makes every release nerve-wracking. Infrastructure debt means your deploy process takes hours instead of minutes.</p><p>All of this debt is interconnected. Poor documentation makes it harder to refactor code. Lack of tests makes architectural changes risky. Infrastructure limitations prevent you from adopting better development practices.</p><h2>Why Rewrites Almost Always Fail</h2><p>When debt becomes sufficiently painful, teams reach for the most seductive solution: rewrite everything from scratch.</p><p>&#8220;<em>This time</em>&#8221;, they tell themselves, &#8220;<em>we&#8217;ll do it right. We&#8217;ve learned from our mistakes. We understand the requirements now. The new version will be clean, fast, and maintainable</em>&#8221;.</p><p>The rewrite almost never delivers on this promise. 
<a href="https://www.joelonsoftware.com/2000/04/06/things-you-should-never-do-part-i/">Joel Spolsky</a> called rewriting from scratch &#8220;<em>the single worst strategic mistake that any software company can make</em>&#8221;, and yet teams make it constantly.</p><p>Want a historical example? The Year 2000 problem, where thousands of systems stored dates as two digits to save memory. What started as a reasonable optimization in the 1960s and 1970s turned into a crisis that cost an <a href="https://en.wikipedia.org/wiki/Year_2000_problem#Cost">estimated $300 billion to fix</a> as the year 2000 approached. That&#8217;s technical debt at civilization scale, accumulating over decades because &#8220;<em>we&#8217;ll fix it later</em>&#8221; never happened until it became an emergency.</p><p>Why do rewrites fail? Because they let you avoid confronting the hard organizational problems that created the debt in the first place. You don&#8217;t have to negotiate for refactoring time. You&#8217;ve declared the old system dead. You don&#8217;t have to gradually improve code while keeping features working. You&#8217;re building fresh. You don&#8217;t have to convince stakeholders that maintenance matters. You&#8217;ve framed it as innovation.</p><p>But here&#8217;s what doesn&#8217;t change: the organizational dynamics. The pressure to ship features quickly remains. The incentives rewarding visible work over sustainable work remain. The lack of time for documentation, testing, and thoughtful design remains.</p><p>So the rewrite proceeds under the same constraints that created the original mess. Worse, you&#8217;re now maintaining two systems: the old one that customers rely on and the new one that&#8217;s not ready yet.</p><p>Three years later, the &#8220;<em>new</em>&#8221; system is the legacy system. It has its own accumulated debt. And someone is proposing another rewrite.</p><h2>What Teams Who Break the Cycle Actually Do</h2><p>Some teams break the cycle. 
They&#8217;re the ones where &#8220;<em>we&#8217;ll fix it later</em>&#8221; sometimes actually means later. What do they do differently?</p><h3>They Make Maintenance Continuous, Not Special</h3><p>There are no &#8220;<em>tech debt sprints</em>&#8221; where refactoring gets quarantined away from real work. Instead, every project includes time for improvement. When you touch a part of the codebase to add a feature, you leave it slightly better than you found it. The boy scout rule isn&#8217;t just a nice idea. It&#8217;s an enforced practice.</p><p>This requires fighting the instinct to separate &#8220;<em>feature work</em>&#8221; from &#8220;<em>cleanup work</em>&#8221;. That separation seems efficient (focus on one thing at a time), but it ensures cleanup never happens. When maintenance is someone else&#8217;s job, it becomes nobody&#8217;s job. When it&#8217;s everyone&#8217;s responsibility as part of their normal work, it gets done.</p><h3>They Build Feedback Loops That Create Accountability</h3><p>If deployments are slow and error-prone, the team that writes code also handles deployments. If the system is hard to debug, the team that builds features also carries the pager. If technical decisions have consequences, the people making those decisions feel them directly.</p><p>This is why the DevOps movement matters beyond just eliminating silos. When developers operate what they build, they can&#8217;t externalize the cost of their technical decisions. The quick hack that&#8217;s annoying to deploy becomes their own problem. The missing observability that makes debugging hard wakes them up at 3am.</p><p>Suddenly, investing in quality becomes rational self-interest.</p><h3>They Develop a Shared Language for Trade-offs</h3><p>Successful teams don&#8217;t pretend that all technical debt is bad or that all maintenance is urgent. 
They get specific:</p><ul><li><p>What breaks if we don&#8217;t fix this?</p></li><li><p>What becomes easier if we do?</p></li><li><p>What&#8217;s the time horizon on these consequences?</p></li></ul><p>This specificity enables honest conversations with stakeholders. Instead of &#8220;<em>we need tech debt time</em>&#8221;, you say &#8220;<em>this system costs us two engineer-weeks per month in workarounds, and we can fix it in six weeks</em>&#8221;.</p><p>That&#8217;s a trade-off product managers can evaluate. It treats engineering concerns as business concerns, because they are.</p><h2>The Economics of Sustainability</h2><p>Let me show you the math that changed how I think about this. The case for maintenance isn&#8217;t moral. It&#8217;s economic.</p><p>Organizations that chronically under-invest in code quality move slower over time, not faster. The quick hacks pile up. The workarounds compound. Eventually, every change requires touching six different systems, coordinating with four teams, and testing a dozen edge cases born from years of patches.</p><p>At some point (typically around year three or four of consistent neglect), teams hit a wall. Velocity plummets. Estimates balloon. Simple features take months. Engineers burn out from the constant firefighting. The backlog fills with bugs that can&#8217;t be fixed without refactoring, which there&#8217;s no time for, because you&#8217;re too busy fixing bugs.</p><p>This is when executives finally authorize &#8220;<em>paying down technical debt</em>&#8221;, usually by pulling engineers off features for a quarter. It&#8217;s too little, too late. The debt wasn&#8217;t created in a quarter. It accumulated over years. A three-month sprint barely makes a dent.</p><p>Meanwhile, the feature roadmap stalls, pressure builds, and the moment the quarter ends, everyone races back to shipping features. 
Six months later, you&#8217;re back where you started, except with more debt.</p><p>The math is straightforward, but it requires believing in a time horizon longer than the next quarter. That&#8217;s the real barrier.</p><h2>What You Can Do (Starting Tomorrow)</h2><p>Here&#8217;s how to start shifting your team&#8217;s maintenance culture, even if you&#8217;re not in a leadership position:</p>
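<p>The economics above fit in a few lines of arithmetic. Here is a sketch using the illustrative numbers from the trade-off quoted earlier; it assumes the six-week fix costs six engineer-weeks, and any real costs would need measuring:</p>

```python
# Break-even for the illustrative trade-off: a workaround burns two
# engineer-weeks per month; fixing it properly is a one-time cost.
# These numbers are the article's example, not measured data.
workaround_cost_per_month = 2.0  # engineer-weeks lost each month
fix_cost = 6.0                   # one-time engineer-weeks (assumption)

break_even_months = fix_cost / workaround_cost_per_month
print(break_even_months)  # 3.0: the fix pays for itself within a quarter

# Year-one comparison: keep paying the tax versus fix it now.
cost_do_nothing = workaround_cost_per_month * 12  # 24.0 engineer-weeks
cost_fix_now = fix_cost                           # the tax stops once fixed
print(cost_do_nothing - cost_fix_now)  # 18.0 engineer-weeks saved
```

<p>Crude as it is, this is the shape of argument stakeholders can evaluate: a payback period instead of a plea for &#8220;<em>tech debt time</em>&#8221;.</p>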
      <p>
          <a href="https://blog.pragmaticdx.com/p/why-well-fix-it-later-becomes-well">
              Read more
          </a>
      </p>
   ]]></content:encoded></item><item><title><![CDATA[The Hidden Cost of Bad Onboarding]]></title><description><![CDATA[Your new hire's first week will shape the next year of their productivity. Most companies treat onboarding as an HR checklist when it's actually one of the highest-leverage investments in developer ex]]></description><link>https://blog.pragmaticdx.com/p/the-hidden-cost-of-bad-onboarding</link><guid isPermaLink="false">https://blog.pragmaticdx.com/p/the-hidden-cost-of-bad-onboarding</guid><dc:creator><![CDATA[Marcel Hauri]]></dc:creator><pubDate>Thu, 02 Oct 2025 10:04:00 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/3921ea22-eaf0-4b52-af3a-2813f389b751_1024x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Your new senior engineer just started Monday. Brilliant resume, great interviews, exactly the kind of person you need. By Friday afternoon, they&#8217;ve made zero meaningful contributions and are seriously questioning whether they made the right choice accepting your offer.</p><p><strong>What happened?</strong></p><p>They spent Monday fighting with laptop setup. Tuesday trying to figure out which Slack or Teams channels actually matter. Wednesday in meetings where everyone used acronyms nobody explained. Thursday attempting to understand a codebase with outdated documentation. Friday realizing they still don&#8217;t have the right database permissions to run anything locally.</p><p>This isn&#8217;t just a bad first week. It&#8217;s a pattern that will define their entire tenure at your company. The frustration compounds. The confusion persists. The initial excitement dies.</p><h2>Why First Impressions Compound</h2><p>Here&#8217;s what most organizations miss: onboarding isn&#8217;t just about getting someone productive quickly. 
It&#8217;s about establishing the mental models, work patterns, and confidence that will determine their effectiveness for months or years to come.</p><p>Think about learning to play an instrument. If your first week involves fighting with a broken guitar and unclear instructions, you&#8217;re not just delayed by a week. You&#8217;ve learned that this activity is frustrating and confusing. That belief shapes how you approach every future practice session.</p><p>The same thing happens with developer onboarding. A new hire who spends their first two weeks hunting for information, asking basic questions that nobody can answer clearly, and hitting mysterious blockers learns something fundamental about your organization: this place is chaotic, and nobody really knows how things work.</p><p>That lesson is hard to unlearn.</p><p>Developers who go through this initiation sometimes become strong contributors, but many do not. Many leave within a few months, citing &#8220;<em>poor organization</em>&#8221; or &#8220;<em>lack of clarity</em>&#8221;. What they really mean is that their first impression is chaos, and nothing afterward changes their view.</p><h2>The Overlooked Metric of Time to First Commit</h2><p>Most companies obsess over hiring metrics. Time to fill a role. Cost per hire. Offer acceptance rates. But almost nobody systematically tracks time to first meaningful commit.</p><p>This is strange because it&#8217;s one of the most revealing metrics you can measure. 
It captures:</p><ul><li><p>How well your documentation reflects reality</p></li><li><p>Whether your development environment actually works</p></li><li><p>If your architecture is comprehensible to smart people who didn&#8217;t build it</p></li><li><p>Whether your team has time to help new hires or is too swamped</p></li><li><p>How many approval gates block simple changes</p></li></ul><p>A team where new engineers ship something real in their first week is fundamentally different from one where it takes a month. And I&#8217;m not talking about trivial changes. I mean actual contributions that required understanding the codebase, making decisions, and getting code reviewed and deployed.</p><p>The companies that track this metric take it seriously.</p><h2>The Documentation Dead Zone</h2><p>Every company has documentation. Most of it is useless for onboarding.</p><p>The problem isn&#8217;t that documentation doesn&#8217;t exist. The problem is that it was written by people who already understood the system, for people who already understand the system. New hires exist in a dead zone where they don&#8217;t know enough to even formulate the right questions.</p><p>Consider what actually happens when a new developer tries to set up a development environment. They find a README that says &#8220;<em>install dependencies with npm install</em>&#8221; but doesn&#8217;t mention you need Node 18 specifically, not 20. Or that you need these three environment variables that aren&#8217;t in the example file. Or that the database migrations only work if you run them in a specific order.</p><p>Each of these is a small thing. Easy to fix once you know about it. But when you&#8217;re new, you don&#8217;t know whether you&#8217;re hitting a known issue with a quick fix or a fundamental problem that requires escalation. 
So you waste an hour trying different things, then sheepishly message someone to ask for help with what turns out to be a one-line config change.</p><p>Now multiply that by dozens of similar paper cuts throughout the first week.</p><p>The real insight is that documentation gaps don&#8217;t just waste time. They train new hires that the documentation can&#8217;t be trusted. So even when good documentation exists, they learn to ask a person instead of checking the docs first. This creates a vicious cycle where the team spends more time answering questions, has less time to update documentation, and the gap widens.</p><h2>Cognitive Load and the Overwhelm Point</h2><p>There&#8217;s a point in every onboarding process where a new hire goes from engaged learning to overwhelmed survival mode. Once you cross that threshold, learning efficiency drops dramatically.</p><p>Think about what a new developer faces in their first week:</p><ul><li><p>New codebase architecture</p></li><li><p>New deployment processes</p></li><li><p>New team communication norms</p></li><li><p>New business domain knowledge</p></li><li><p>New tools and systems</p></li><li><p>New organizational structure and politics</p></li></ul><p>Any one of these is manageable. All of them simultaneously? That&#8217;s when people start forgetting things they learned two days ago because their mental buffer is completely full.</p><p>The best onboarding programs recognize this and sequence learning deliberately. First week: get something running locally and understand one small part of the system deeply. Second week: expand to adjacent systems and make a meaningful change. 
Third week: start participating in planning and architectural discussions.</p><p>But most companies just throw everything at new hires at once and wonder why they seem overwhelmed.</p><h2>What Actually Works</h2><p>The companies that do onboarding well share some common patterns.</p><p><strong>They assign a dedicated onboarding buddy.</strong> Not just someone who&#8217;s &#8220;<em>available for questions</em>&#8221; but a person whose job for the first two weeks includes checking in daily, proactively identifying blockers, and providing context that isn&#8217;t written down anywhere.</p><p>This isn&#8217;t mentorship exactly. It&#8217;s more like having a native guide when you&#8217;re visiting a foreign country. Someone who can say &#8220;<em>oh, when the documentation says X, what they actually mean is Y</em>&#8221; or &#8220;<em>yeah, that error message is confusing, here&#8217;s what&#8217;s really happening</em>&#8221;.</p><p><strong>They have a concrete first project scoped specifically for learning.</strong> Something real enough to be meaningful but small enough to complete in a few days. The goal isn&#8217;t just to ship something, it&#8217;s to touch enough of the system to build a mental model.</p><p>One team I worked with had new hires implement a small feature in a deliberately well-architected part of the codebase. Not because that feature was urgently needed, but because completing it required understanding database schemas, API contracts, testing patterns, and deployment processes. By the end of their first week, they&#8217;d seen the full development lifecycle.</p><p><strong>They keep a living document of &#8220;things new hires always ask.&#8221;</strong> Every time a new person asks a question that isn&#8217;t clearly documented, someone updates the onboarding guide. 
This creates a feedback loop where onboarding continuously improves.</p><p><strong>They measure and iterate.</strong> They ask new hires what was confusing, what took longer than it should have, what surprised them. Not in a formal survey six months later, but in casual check-ins during week one, week two, week four.</p><p>The best approach is a simple weekly form during the first month that asks, &#8220;<em>What was your biggest blocker this week?</em>&#8221; and &#8220;<em>What would have helped you be more effective?</em>&#8221; The answers show exactly where onboarding fails.</p><h2>The Organizational Signal</h2><p>Your onboarding process sends a signal about how your organization operates. A chaotic, frustrating onboarding experience suggests chaos and frustration ahead. A smooth, thoughtful one suggests that someone cares about systems and processes.</p><p>This matters more than most leaders realize. Developers are constantly evaluating whether they made the right choice joining your company. The first few weeks are when they&#8217;re most open to that evaluation. Everything they experience gets interpreted as representative of the broader organization.</p><p>If setup is broken, they assume other systems are broken. If documentation is outdated, they assume communication is generally poor. If they can&#8217;t get questions answered quickly, they assume the team is overwhelmed or disorganized.</p><p>These assumptions might not be fair or accurate, but they&#8217;re natural. First impressions become the lens through which everything else gets interpreted.</p><p>The flip side is also true. A great onboarding experience creates goodwill and benefit of the doubt that carries you through later problems. 
When something goes wrong after a smooth onboarding, new hires think &#8220;<em>that&#8217;s unusual</em>&#8221; instead of &#8220;<em>here we go again</em>&#8221;.</p><h2>Starting From Where You Are</h2><p>You might be reading this thinking &#8220;<em>we&#8217;re already overwhelmed, we can&#8217;t rebuild our entire onboarding process</em>&#8221;. Fair enough. But you can start measuring time to first commit and asking new hires what blocked them.</p><p>Pick one thing to fix based on what you learn. Maybe it&#8217;s the development setup process. Maybe it&#8217;s documentation for a particularly confusing system. Maybe it&#8217;s creating clearer ownership so new hires know who to ask about what.</p><p>Each improvement compounds. Better documentation reduces questions. Fewer questions means more time to improve systems. Better systems mean easier onboarding. The cycle reinforces itself in a positive direction.</p><p>The key is treating onboarding as a product you&#8217;re building for your newest customers. Those customers happen to be your own employees, but the mindset is the same. Understand their needs, remove friction, iterate based on feedback.</p><h2>The Real Investment</h2><p>Improving onboarding requires time from your most experienced people. The ones who know the systems well enough to document them accurately. Who understand the subtle context that isn&#8217;t written down. Who can identify what&#8217;s actually important versus what&#8217;s just historical accident.</p><p>These are also the people who are busiest with &#8220;<em>real work</em>&#8221;. Spending a day improving onboarding documentation feels like a luxury when there&#8217;s a production issue or a tight deadline.</p><p>But here&#8217;s the math: if better onboarding saves each new hire 40 hours in their first month, and you hire 10 people this year, that&#8217;s 400 hours. 
Plus whatever time existing team members save by not answering the same questions repeatedly.</p><p>More importantly, those new hires will be more confident, more effective, and more likely to stay. That&#8217;s the real return on investment.</p><p><em><strong>Your next hire starts Monday. They&#8217;re excited, nervous, ready to prove themselves. What will their first week teach them about your organization?</strong></em></p><p>Will they learn that this is a place where things work, where documentation can be trusted, where getting help is easy and expected? Or will they learn that they&#8217;re on their own to figure things out, that systems are fragile, that asking questions makes them look incompetent?</p><p>The answer to that question might determine whether they&#8217;re still around this time next year.</p><p>If this resonated with you, don&#8217;t miss future posts. Subscribe for free, or become a paid supporter of Pragmatic Developer Experience.</p>]]></content:encoded></item><item><title><![CDATA[Why Developer Experience Is More Than Just Better Tooling]]></title><description><![CDATA[Better tools help, but they're not why your best developers stay or leave. 
Here's what actually moves the needle on productivity and satisfaction.]]></description><link>https://blog.pragmaticdx.com/p/why-developer-experience-is-more</link><guid isPermaLink="false">https://blog.pragmaticdx.com/p/why-developer-experience-is-more</guid><dc:creator><![CDATA[Marcel Hauri]]></dc:creator><pubDate>Tue, 30 Sep 2025 10:00:25 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/4cb3fe5f-98df-4294-a112-c8b6abc7854a_1024x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>I often wonder why the developer experience conversation always seems to circle back to tools. You see it everywhere: blog posts about the best IDEs, conference talks about CI/CD pipelines, LinkedIn threads debating build systems. Don&#8217;t get me wrong. Tools matter. But here&#8217;s the thing: I&#8217;ve watched teams with cutting-edge toolchains struggle while others with decent-but-not-amazing tools absolutely thrive. That gap? It&#8217;s everything we&#8217;re not talking about.</p><h2>The Tooling Obsession</h2><p>Tools are easy to discuss because they&#8217;re concrete. You can benchmark them. GitHub Copilot saves X minutes per day. Switching from Jenkins to GitHub Actions reduces build time by Y percent. These numbers feel solid, measurable, actionable.</p><p>But here&#8217;s the paradox: <a href="https://www.gartner.com/en/software-engineering/topics/developer-experience">research from Gartner</a> shows that teams with high-quality developer experience are 33% more likely to attain their target business outcomes, yet organizations continue investing heavily in tooling without seeing corresponding improvements in satisfaction or retention. The interesting question here is: what are we missing?</p><h2>What Actually Happens When Developers Work</h2><p>Let me paint a more realistic picture. A developer doesn&#8217;t just open their IDE and start typing. They&#8217;re holding a mental model of how three microservices interact. 
They&#8217;re trying to remember why the caching layer was implemented that particular way. They&#8217;re context-switching between a vague product requirement, a Teams or Slack thread with seven people offering conflicting opinions, and that one piece of documentation that&#8217;s definitely out of date but nobody&#8217;s sure by how much.</p><p>Now, you could have the fastest build system in the world, but if a developer spends 30 minutes hunting down tribal knowledge or trying to understand why a decision was made six months ago, your tooling investment isn&#8217;t moving the needle. <a href="https://blog.pragmaticdx.com/p/the-problem-of-interruptions">As I wrote about interruptions and flow state</a>, real programming happens in long, uninterrupted blocks where you can hold an entire system in your head. But that&#8217;s becoming almost countercultural.</p><p>This raises an interesting point about cognitive load. It&#8217;s not just about the code you&#8217;re writing, it&#8217;s about everything you need to hold in your head to write it effectively. When developers are fighting just to understand how to do basic tasks, they&#8217;re spending cognitive cycles on plumbing instead of problem-solving.</p><p>Teams often treat their CI/CD pipeline as critical infrastructure while letting documentation rot and architectural decisions live only in people&#8217;s heads. Both are infrastructure, just different kinds.</p><h2>The Autonomy Problem</h2><p>Here&#8217;s something I&#8217;ve noticed: developers can be remarkably tolerant of clunky tools if they feel they have real agency over their work. But give them perfect tooling in an environment where every decision requires three approval meetings? That&#8217;s a recipe for quiet quitting.</p><p>The trade-off here is tricky. Organizations need some level of governance. You can&#8217;t have everyone making architectural decisions in isolation. 
But there&#8217;s a huge difference between &#8220;<em>we have clear principles and trust you to apply them</em>&#8221; and &#8220;<em>you need approval to upgrade a minor dependency version.</em>&#8221; Where that line sits depends on your organization&#8217;s risk tolerance, team maturity, and frankly, how much you actually trust your engineers.</p><p>Netflix&#8217;s culture emphasizes &#8220;<a href="https://jobs.netflix.com/culture">freedom and responsibility</a>&#8221;, where engineers get information and freedom to make decisions, with managers practicing &#8220;context not control&#8221;. But this works because they&#8217;ve invested heavily in <a href="https://noise.getoto.net/2025/09/19/empowering-netflix-engineers-with-incident-management/">incident management processes</a> and monitoring capabilities, plus chaos engineering practices that continuously test system resilience. That&#8217;s not a contradiction. It&#8217;s a system designed around trust with guardrails, not permission gates.</p><h2>The Communication Maze</h2><p>This leads to another consideration I find fascinating: how much of developer experience is actually about information flow? Think about your last frustrating workday. How much of it was fighting your IDE versus fighting to understand requirements, waiting for answers in Slack or Teams, or sitting in meetings that could&#8217;ve been async updates?</p><p><a href="https://slab.com/blog/stripe-writing-culture/">Stripe has been cited</a> extensively for its &#8220;<em>writing culture</em>&#8221;. Comprehensive RFCs, detailed documentation, thoughtful async communication. That&#8217;s developer experience work, even though it has nothing to do with their build system. The interesting thing is that this kind of cultural infrastructure is harder to build than technical infrastructure, but it often has more impact.</p><p>But wait, there&#8217;s a counterargument here. Some teams thrive on synchronous collaboration. 
Look at pair programming or mob programming advocates. They&#8217;d argue that real-time interaction is essential. So which is right? Honestly, it probably depends on your team composition, the type of work you&#8217;re doing, and individual preferences. The gray area here matters more than we admit.</p><h2>The Invisible Stuff That Actually Matters</h2><p>Here&#8217;s where it gets subtle. Can a developer experiment with a new approach without someone questioning their velocity? Can they admit they&#8217;re stuck without it reflecting poorly in performance reviews? Can they push back on a feature request because the technical debt is getting unsustainable?</p><p>These questions determine whether your environment enables good work or slowly grinds people down. <a href="https://blog.pragmaticdx.com/p/why-ignoring-developer-frustration">The research I covered on developer frustration</a> found that flow state, feedback loops, and cognitive load are distinct factors that drive both productivity and retention. But notice what&#8217;s missing from that list: tooling specifications.</p><p>The challenge is that psychological safety, clear ownership, and reasonable work-life boundaries are harder to measure than build times. You can&#8217;t put them in a dashboard. That makes them easy to deprioritize, even though they&#8217;re often the actual differentiators between companies that retain talent and those that don&#8217;t.</p><h2>What About Learning?</h2><p>I often wonder whether we&#8217;ve made the learning curve too shallow with modern tooling. Now, hear me out. I&#8217;m not advocating for hazing rituals or deliberately obtuse systems. But there&#8217;s something interesting happening when tools become so abstracted that developers don&#8217;t understand what&#8217;s actually happening underneath.</p><p>Take <a href="https://vercel.com/">Vercel</a> or <a href="https://www.netlify.com/">Netlify</a>. Amazing developer experiences for deploying web apps. 
But do developers understand what they&#8217;re abstracting away? Does it matter? Maybe it doesn&#8217;t for application developers, but it might for platform engineers. This raises the question: does good DX mean hiding complexity, or does it mean making complexity manageable and learnable?</p><p>The best developer experiences I&#8217;ve seen do both. They provide good defaults and abstractions, but they also make it possible to dig deeper when needed. <a href="https://blog.pragmaticdx.com/p/building-developer-first-apis">When I wrote about building developer-first APIs</a>, this balance came up constantly. The goal isn&#8217;t just to make things work, it&#8217;s to make them understandable.</p><h2>A Framework for Moving Forward</h2><p>So what actually works? The evidence points to flow state, feedback loops, cognitive load, and platform quality as key factors. Here&#8217;s how to address them:</p>
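<p>Of those factors, feedback loops are the most straightforward to quantify. As one illustrative starting point (the run records below are made up, not pulled from any real CI API), a team could track how long developers wait between pushing a change and seeing a result:</p>

```python
from datetime import datetime, timedelta
from statistics import median

# Hypothetical CI runs as (queued_at, finished_at) timestamp pairs.
runs = [
    (datetime(2025, 9, 1, 9, 0), datetime(2025, 9, 1, 9, 12)),
    (datetime(2025, 9, 1, 10, 5), datetime(2025, 9, 1, 10, 31)),
    (datetime(2025, 9, 1, 11, 40), datetime(2025, 9, 1, 11, 49)),
]

# Feedback-loop length in minutes for each run.
wait_minutes = [(done - queued) / timedelta(minutes=1) for queued, done in runs]
print(median(wait_minutes))  # 12.0
```

<p>Tracking a number like this over time turns &#8220;<em>feedback loops feel slow</em>&#8221; into something a team can actually act on.</p>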
      <p>
          <a href="https://blog.pragmaticdx.com/p/why-developer-experience-is-more">
              Read more
          </a>
      </p>
   ]]></content:encoded></item><item><title><![CDATA[Why AI Adoption Is Universal but Its Benefits Are Not]]></title><description><![CDATA[According to DORA, AI has become standard practice across software teams worldwide. But without organizational foundations like strong platforms and workflows, software delivery remains fragile.]]></description><link>https://blog.pragmaticdx.com/p/why-ai-adoption-is-universal-but</link><guid isPermaLink="false">https://blog.pragmaticdx.com/p/why-ai-adoption-is-universal-but</guid><dc:creator><![CDATA[Marcel Hauri]]></dc:creator><pubDate>Thu, 25 Sep 2025 12:00:30 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/510c4cf0-743d-4b64-9b30-04e9b702b114_1024x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>The <a href="https://dora.dev/research/2025/dora-report/">2025 DORA State of AI-assisted Software Development</a> report reveals a notable disconnect in AI adoption outcomes. While 95% of developers now use AI tools at work, only 70% trust the code these tools generate. This gap points to implementation challenges that many organizations haven&#8217;t fully addressed.</p><p>The research, based on responses from nearly 5,000 technology professionals worldwide, shows that AI&#8217;s impact varies significantly depending on organizational context rather than tool sophistication alone.</p><p><strong>Why this matters:</strong> Understanding what determines AI success can help organizations make better investment decisions and avoid common implementation pitfalls.</p><h2>The Trust Gap in AI Usage</h2><p>The data shows widespread AI adoption alongside measured skepticism. While most developers report productivity improvements from AI tools, 30% express little to no trust in AI-generated output. 
This suggests a &#8220;trust but verify&#8221; approach that the researchers interpret as potentially healthy rather than problematic.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!-jos!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc1694efe-a3fe-4f03-8faf-cf1bfe219675_1436x982.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!-jos!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc1694efe-a3fe-4f03-8faf-cf1bfe219675_1436x982.png 424w, https://substackcdn.com/image/fetch/$s_!-jos!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc1694efe-a3fe-4f03-8faf-cf1bfe219675_1436x982.png 848w, https://substackcdn.com/image/fetch/$s_!-jos!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc1694efe-a3fe-4f03-8faf-cf1bfe219675_1436x982.png 1272w, https://substackcdn.com/image/fetch/$s_!-jos!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc1694efe-a3fe-4f03-8faf-cf1bfe219675_1436x982.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!-jos!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc1694efe-a3fe-4f03-8faf-cf1bfe219675_1436x982.png" width="1436" height="982" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/c1694efe-a3fe-4f03-8faf-cf1bfe219675_1436x982.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:982,&quot;width&quot;:1436,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:828208,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://blog.pragmaticdx.com/i/174417674?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc1694efe-a3fe-4f03-8faf-cf1bfe219675_1436x982.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!-jos!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc1694efe-a3fe-4f03-8faf-cf1bfe219675_1436x982.png 424w, https://substackcdn.com/image/fetch/$s_!-jos!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc1694efe-a3fe-4f03-8faf-cf1bfe219675_1436x982.png 848w, https://substackcdn.com/image/fetch/$s_!-jos!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc1694efe-a3fe-4f03-8faf-cf1bfe219675_1436x982.png 1272w, https://substackcdn.com/image/fetch/$s_!-jos!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc1694efe-a3fe-4f03-8faf-cf1bfe219675_1436x982.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" 
height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>During interviews, developers compared this skepticism to how they approach solutions found on Stack Overflow&#8212;useful resources that still require validation. 
The research indicates this cautious approach may be a sign of mature adoption rather than a barrier to it.</p><p>However, this trust gap does highlight an important consideration: organizations need to invest in training focused on critical evaluation and validation of AI-generated work, not just tool usage.</p><h2>Seven Team Performance Profiles</h2><p>The report identifies seven distinct team archetypes based on performance, stability, and well-being metrics:</p><ul><li><p><strong>Foundational challenges</strong> (10%): Multiple performance gaps</p></li><li><p><strong>Legacy bottleneck</strong> (11%): Reactive work due to unstable systems</p></li><li><p><strong>Constrained by process</strong> (17%): Stable systems, inefficient processes</p></li><li><p><strong>High impact, low cadence</strong> (7%): Strong results, high instability</p></li><li><p><strong>Stable and methodical</strong> (15%): Quality work at deliberate pace</p></li><li><p><strong>Pragmatic performers</strong> (20%): Strong delivery with average well-being</p></li><li><p><strong>Harmonious high-achievers</strong> (20%): High performance with low burnout</p></li></ul><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!16Cq!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9b8b3523-538e-4a14-8e4f-d763065d4239_1438x1128.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!16Cq!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9b8b3523-538e-4a14-8e4f-d763065d4239_1438x1128.png 424w, 
https://substackcdn.com/image/fetch/$s_!16Cq!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9b8b3523-538e-4a14-8e4f-d763065d4239_1438x1128.png 848w, https://substackcdn.com/image/fetch/$s_!16Cq!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9b8b3523-538e-4a14-8e4f-d763065d4239_1438x1128.png 1272w, https://substackcdn.com/image/fetch/$s_!16Cq!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9b8b3523-538e-4a14-8e4f-d763065d4239_1438x1128.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!16Cq!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9b8b3523-538e-4a14-8e4f-d763065d4239_1438x1128.png" width="1438" height="1128" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/9b8b3523-538e-4a14-8e4f-d763065d4239_1438x1128.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1128,&quot;width&quot;:1438,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1218499,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://blog.pragmaticdx.com/i/174417674?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9b8b3523-538e-4a14-8e4f-d763065d4239_1438x1128.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" 
srcset="https://substackcdn.com/image/fetch/$s_!16Cq!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9b8b3523-538e-4a14-8e4f-d763065d4239_1438x1128.png 424w, https://substackcdn.com/image/fetch/$s_!16Cq!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9b8b3523-538e-4a14-8e4f-d763065d4239_1438x1128.png 848w, https://substackcdn.com/image/fetch/$s_!16Cq!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9b8b3523-538e-4a14-8e4f-d763065d4239_1438x1128.png 1272w, https://substackcdn.com/image/fetch/$s_!16Cq!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9b8b3523-538e-4a14-8e4f-d763065d4239_1438x1128.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" 
stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><blockquote><p><em>Our analysis revealed seven distinct team archetypes, ranging from those excelling in healthy, sustainable environments (Harmonious high-achievers) to those trapped by technical debt (Legacy bottleneck) or inefficient processes (Constrained by process).</em></p></blockquote><p>This framework suggests that traditional software delivery metrics alone don&#8217;t capture the full picture of team health and effectiveness.</p><h2>The Platform Foundation Effect</h2><p>One of the clearer findings involves platform engineering. With 90% adoption across organizations, the research shows a strong correlation between platform quality and AI effectiveness. Organizations with higher-quality internal platforms see amplified benefits from AI adoption on organizational performance.</p><p>The data suggests that without solid platform foundations, AI benefits tend to remain localized rather than scaling across the organization. This isn&#8217;t necessarily about having the latest tools, but about having systems that can effectively distribute and govern AI capabilities.</p><h2>Value Stream Management as a Multiplier</h2><p>Teams practicing <a href="https://framework.scaledagile.com/value-stream-management">value stream management</a>&#8212;systematic visualization and improvement of work flow&#8212;show stronger results from AI investments. 
The research indicates that VSM helps ensure AI gets applied to actual system constraints rather than just individual tasks.</p><p>The mechanism appears to be that VSM provides the systems-level view needed to direct AI toward meaningful bottlenecks rather than optimizing steps that aren&#8217;t actually constraining overall performance.</p><h2>The DORA AI Capabilities Model</h2><p>The research identifies seven organizational capabilities that correlate with better AI outcomes:</p><ol><li><p>Clear and communicated AI policies</p><blockquote><p><em>An organization with a clear and communicated AI stance is one that encourages and expects AI use by its developers, supports its developers&#8217; experimentation with AI at work, and makes explicit which AI tools are permitted and the applicability of their AI policy for their staff.</em></p></blockquote></li><li><p>Healthy data ecosystems</p><blockquote><p><em>When organizations invest in creating and maintaining high-quality, accessible, unified data ecosystems, they can yield even higher benefits for their organization&#8217;s performance than with AI adoption alone.</em></p></blockquote></li><li><p>AI-accessible internal data</p><blockquote><p><em>Organizations who invest time in connecting their AI tools to their internal systems may observe better outcomes than organizations who rely on the less specialized knowledge provided by generic foundational models.</em></p></blockquote></li><li><p>Strong version control practices</p><blockquote><p><em>One of the most tangible examples of this is the reliance on rollback or revert features. 
The ability to undo changes swiftly and without fuss is not just a convenience; it&#8217;s a critical enabler of speed and resilience.</em></p></blockquote></li><li><p>Working in small batches</p><blockquote><p><em>Working in small batches increases reported product performance, while also decreasing perceived friction for AI-assisted teams.</em></p></blockquote></li><li><p>User-centric focus</p><blockquote><p><em>Organizations that encourage AI adoption will benefit from incorporating a rich understanding of their end users, their goals, and their feedback into their product roadmaps and strategies.</em></p></blockquote></li><li><p>Quality internal platforms</p><blockquote><p><em>We believe designing and maintaining quality internal development platforms is an important capability for organizations to successfully develop software in an AI-assisted environment.</em></p></blockquote></li></ol><p>Teams with these capabilities show stronger positive effects from AI adoption across multiple performance dimensions.</p><h2>Persistent Challenges</h2><p>Despite productivity claims, the research shows AI has minimal impact on developer burnout or workplace friction. The authors suggest these may be systemic issues that technology alone cannot address.</p><p>AI adoption continues to correlate with increased software delivery instability, though this year&#8217;s data shows improvement in software delivery throughput compared to last year&#8217;s findings. 
This suggests some organizational adaptation is occurring.</p><h2>Year-over-Year Changes</h2><p>Comparing 2024 to 2025 results, several relationships have shifted positively:</p><ul><li><p>AI&#8217;s impact on valuable work (negative to positive)</p></li><li><p>Software delivery throughput (negative to positive)</p></li><li><p>Product performance (neutral to positive)</p></li></ul><p>These changes suggest both tool improvement and organizational learning over the past year.</p><h2>Practical Implications</h2><p>The research suggests several practical considerations:</p><p><strong>For AI tool selection:</strong> Focus on organizational readiness factors rather than just tool capabilities.</p><p><strong>For implementation:</strong> Invest in supporting systems (platforms, data, processes) alongside AI tool procurement.</p><p><strong>For measurement:</strong> Consider team health factors beyond just delivery metrics when evaluating AI impact.</p><p><strong>For training:</strong> Emphasize critical evaluation skills and verification practices, not just tool usage.</p><p>The research positions current AI adoption as being in transition&#8212;moving from initial experimentation toward more systematic organizational integration. 
Success appears to depend more on implementation context than on specific AI capabilities.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://dora.dev/research/2025/dora-report/&quot;,&quot;text&quot;:&quot;Get the report&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://dora.dev/research/2025/dora-report/"><span>Get the report</span></a></p><div><hr></div><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://blog.pragmaticdx.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">If this resonated with you, don&#8217;t miss future posts. Subscribe for free, or become a paid supporter of Pragmatic Developer Experience.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><div><hr></div><p></p>]]></content:encoded></item><item><title><![CDATA[Why Ignoring Developer Frustration Is Risking Your Business]]></title><description><![CDATA[Research from Microsoft and GitHub reveals how bad tooling, interruptions, and confusing processes are quietly undermining productivity, innovation, and profits]]></description><link>https://blog.pragmaticdx.com/p/why-ignoring-developer-frustration</link><guid isPermaLink="false">https://blog.pragmaticdx.com/p/why-ignoring-developer-frustration</guid><dc:creator><![CDATA[Marcel Hauri]]></dc:creator><pubDate>Tue, 23 Sep 2025 15:02:59 GMT</pubDate><enclosure 
url="https://substack-post-media.s3.amazonaws.com/public/images/d1a9dda1-57d0-447d-bdc7-74a28c4e1250_1024x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>You know that feeling when you're trying to get something done, but every tool you touch seems designed to waste your time? Your laptop's slow, the software keeps crashing, and you spend more time fighting the system than actually working?</p><p>Well, that's the daily reality for software developers, and it's costing companies way more than anyone realized.</p><p>Here's the thing: we've all suspected that frustrated developers produce worse work, but until now, nobody had the hard numbers to prove it. <a href="https://queue.acm.org/detail.cfm?id=3639443">A study from researchers at Microsoft Research, GitHub, and DX</a> just changed that. They surveyed 219 developers and found something that should make every CEO pay attention.</p><h2>The Research That Changes Everything</h2><p>Now, before you roll your eyes at another "<em>developer happiness</em>" study, let me tell you why this one's different. The researchers didn't just ask developers how they felt. They connected those feelings to measurable business outcomes. Productivity, innovation, retention and profitability.</p><p>But here's where it gets interesting. They found that developer experience isn't just one thing you can fix with better coffee or standing desks. It breaks down into three specific areas:</p><p><strong>Flow State</strong> - Can developers actually get into deep work mode? Think about it: when was the last time you had four uninterrupted hours to tackle something complex? For many developers, that's becoming impossible. Notifications, urgent meetings, production fires. They're constantly context-switching.</p><p><strong>Feedback Loops</strong> - How fast can developers get answers and approvals? 
I'm talking about code reviews sitting for days, questions posted in Slack or Teams that never get answered, deployment processes that take hours to give you a simple yes or no.</p><p><strong>Cognitive Load</strong> - How much mental energy gets wasted on just figuring out how to do basic tasks? When you're using tools like Kubernetes or navigating a massive codebase without proper documentation, you're spending cognitive cycles on plumbing instead of problem-solving.</p><p>This raises an interesting point about how we think about developer productivity. Most companies track lines of code or tickets closed&#8212;but what if the real bottleneck is developers spending 60% of their time just trying to understand what they're supposed to be doing?</p><h2>The Numbers Don't Lie (But They're Complicated)</h2><p>Here's where the study gets compelling, but also where we need to be careful about oversimplifying things.</p><p>Developers with dedicated deep work time reported feeling 50% more productive. But that's <em>reported</em> productivity, not measured output. Still, when you think about the nature of software development, this makes sense. Writing code isn't like working on an assembly line where you can measure widgets per hour. It's knowledge work that requires sustained attention.</p><p>That leads to another consideration: the study found that developers who understood their codebase well were 42% more productive. Now, this could mean two things. Either clear, well-documented code makes people more productive, or more productive people tend to work at places with better codebases. It's probably both, but the causation gets somewhat murky.</p><p>The feedback loop findings are particularly interesting. Teams with fast code review turnaround saw 20% higher innovation rates. This actually aligns with what we know about creative work&#8212;<strong>momentum matters</strong>. 
When you have an idea and have to wait three days to get feedback, you've often moved on mentally by the time the review comes back.</p><h2>What the Study Doesn't Tell Us</h2><p>Here's the thing about research like this&#8212;it's incredibly valuable, but it has some blind spots we should acknowledge.</p><p>First, all these developers worked at companies that were already customers of <a href="https://getdx.com/">DX</a>, the developer experience platform. That's not necessarily a problem, but it does mean we're looking at organizations that were already thinking about developer experience. Would we see the same patterns at a traditional enterprise that's never considered these questions? Hard to say.</p><p>Second, the study is cross-sectional&#8212;a snapshot in time rather than tracking changes over months or years. We don't know if improving developer experience actually causes better business outcomes, or if successful companies just happen to invest more in developer experience. The researchers acknowledge this limitation, but it's worth keeping in mind when making investment decisions.</p><p>There's also the question of survivorship bias. The developers who responded to this survey might be the ones who care most about developer experience. What about the developers who've already burned out and left? Or the ones who've just accepted that work is supposed to be frustrating?</p><h2>The Real-World Reality Check</h2><p>Let's talk about what this actually looks like in practice, because it's messier than any research paper can capture.</p><p>Take a company like Spotify. They've invested heavily in developer experience with their Backstage platform (now open-sourced), which provides a unified interface for all their internal tools and services. But implementing something like that isn't just a matter of buying software&#8212;it requires organizational change, cultural buy-in, and ongoing maintenance. 
And even Spotify still deals with the fundamental tension between moving fast and maintaining quality.</p><p>This raises another interesting point about the study's recommendations. They suggest measuring developer experience every 3-6 months and making iterative improvements. That sounds reasonable, but anyone who's worked at a large company knows how hard it is to get consistent executive attention on something that doesn't directly generate revenue.</p><h2>The Trade-offs Nobody Talks About</h2><p>Here's where things get really complex, and where most discussions about developer experience get oversimplified.</p><p>Improving developer experience often means accepting trade-offs. Want to reduce cognitive load by standardizing on one programming language across your organization? Great, but you might lose the ability to use the best tool for specific problems. Want to implement comprehensive code review processes to improve feedback loops? Fantastic, but you'll slow down deployment speed.</p><p>Consider the current AI coding assistant trend. Tools like GitHub Copilot or Cursor can dramatically improve developer productivity by reducing the time spent on boilerplate code. But they also introduce new forms of cognitive load&#8212;developers need to learn to prompt effectively, review AI-generated code carefully, and sometimes fight against suggestions that are subtly wrong.</p><p>That leads to another consideration: the developer experience that works for senior engineers might be completely different from what works for junior developers. 
A powerful, flexible tool might be exactly what an experienced developer needs to get into flow state, but it could overwhelm someone who's still learning the basics.</p><h2>What This Actually Means for Your Business</h2><p>Now, if you're a business leader reading this, you're probably wondering: "<em>Okay, but what do I actually do with this information?</em>"</p><p>The study provides a roadmap, but let's be honest about what it really entails. They recommend starting by measuring your current developer experience. Sounds simple, right? But think about what that actually means. You need to survey developers in a way that gets honest feedback (not always easy in corporate environments), analyze the results, and then figure out what changes are actually feasible given your budget, timeline, and organizational constraints.</p><p>The researchers found that deep work and engaging tasks had the biggest impact on developer outcomes. But creating space for deep work in most organizations means saying no to other things&#8212;fewer status meetings, longer deployment cycles, maybe even accepting that some urgent requests will have to wait.</p><p>Here's what's particularly tricky: many of the factors that improve developer experience also benefit from economies of scale. It's easier to invest in sophisticated tooling and processes when you have hundreds of developers. If you're a smaller company, you might need to be more strategic about which improvements will give you the biggest bang for your buck.</p><h2>The Bigger Picture </h2><p>There's a broader context here that's worth considering. This research comes at a time when the software industry is going through some major shifts.</p><p>First, there's the economic climate. With interest rates higher and growth harder to come by, companies are scrutinizing every investment more carefully. 
That makes the business case for developer experience both more important (you need data to justify spending) and more challenging (there's less room for experimentation).</p><p>Second, there's the generative AI revolution. Tools like ChatGPT and Claude are changing how developers work, but they're also changing what kinds of developer experience improvements matter most. If AI can handle more routine coding tasks, does that make deep work time more or less important? If AI can provide instant answers to technical questions, does that change how we think about feedback loops?</p><p>Third, there's the remote work reality. The study doesn't specifically address remote vs. in-person developer experience, but anyone managing distributed teams knows it adds complexity. How do you create flow state when developers are juggling home distractions? How do you maintain fast feedback loops across time zones?</p><h2>Where Do We Go From Here?</h2><p>The most valuable thing about this research isn't the specific numbers&#8212;it's that it gives us a framework for thinking systematically about developer experience. Instead of just throwing money at the problem or hoping that new tools will magically fix everything, we can start measuring and iterating.</p><p>But let's be realistic about the challenges ahead. Improving developer experience requires sustained investment and organizational commitment. It means making some teams slower in the short term to make the whole organization faster in the long term. It means accepting that some of your best developers might leave if you don't address these issues, but also recognizing that fixing everything at once isn't possible.</p><p>The companies that figure this out&#8212;<em>that find the right balance between developer happiness and business constraints</em>&#8212;are going to have a real competitive advantage. 
Not because happy developers automatically write better code, but because organizations that can attract and retain top technical talent while maintaining high productivity are going to win in an increasingly software-driven world.</p><p>That's the real insight from this research. It's not just about making developers happy&#8212;it's about creating sustainable competitive advantage through better human capital management.</p><div><hr></div><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://blog.pragmaticdx.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">If this resonated with you, don&#8217;t miss future posts. Subscribe for free, or become a paid supporter of Pragmatic Developer Experience.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p></p>]]></content:encoded></item><item><title><![CDATA[The Problem of Interruptions]]></title><description><![CDATA[We like to think multitasking makes us efficient, but the opposite is true. 
Every interruption fractures attention, derails complex thinking, and turns deep work into shallow busywork.]]></description><link>https://blog.pragmaticdx.com/p/the-problem-of-interruptions</link><guid isPermaLink="false">https://blog.pragmaticdx.com/p/the-problem-of-interruptions</guid><dc:creator><![CDATA[Marcel Hauri]]></dc:creator><pubDate>Thu, 18 Sep 2025 11:41:31 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/3a677c52-44b7-4624-b12a-437ca9c4aa57_1024x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>A few days back, I was finally making progress on a nasty bug. You know the kind - where you've got mental models of three different systems loaded in your head, and you can feel yourself getting close to the breakthrough. Then... ping. Teams notification.</p><p>"<em>Hey, quick question about the login flow.</em>"</p><p>I tried to ignore it. Kept debugging. Another ping.</p><p>"<em>When you get a sec.</em>"</p><p>Then another. "<em>No rush, but...</em>"</p><p>By the time I caved and answered (because apparently "<em>no rush</em>" means "<em>interrupt whatever you're doing right now</em>"), I'd lost everything. That careful mental scaffolding I'd built up over two hours? Gone. It took me another hour just to get back to where I was.</p><p>And you know what the "<em>quick question</em>" was? Something they could have found by reading the documentation.</p><p>This is the reality of modern software development. We've created workplaces that are fundamentally hostile to the kind of deep thinking that actually produces good software. Then we wonder why everything feels rushed, why technical debt keeps piling up, and why our best engineers seem frustrated all the time.</p><h2>The Multitasking Myth</h2><p>Let me be blunt here: you can't multitask. Neither can I. Nobody can.</p><p>What we call multitasking is really just rapid context switching, and every switch has a cost. 
It's like running too many applications on a computer with insufficient RAM - everything runs slower because the system keeps swapping things in and out of memory.</p><p>I learned this the hard way a while ago during a particularly hellish week where I was trying to:</p><ul><li><p>Fix a critical bug</p></li><li><p>Review code for three different features</p></li><li><p>Help junior developers work through blockers</p></li><li><p>Answer Teams messages as they came in</p></li></ul><p>By the end of the week, I felt like I'd been working constantly, but couldn't point to a single meaningful thing I'd actually finished. Everything was half-done, partially reviewed, or "almost fixed."</p><p>That's when it hit me: trying to do everything means accomplishing nothing.</p><h2>Why Everything Feels Urgent (But Isn't)</h2><p>We've created this weird culture where everything feels like an emergency. A question about next week's deployment becomes "<em>urgent</em>." A discussion about button colors becomes "<em>blocking</em>." Someone's inability to find documentation becomes your immediate problem.</p><p>The thing is, most urgent things aren't actually urgent. They're just loud.</p><p>Last month alone, I got these "<em>urgent</em>" interruptions:</p><ul><li><p>A question about creating users that was literally documented in the command itself</p></li><li><p>A request to review a PR for a feature that wasn't needed for another two weeks</p></li><li><p>A "<em>quick sync</em>" about a project nobody had touched in a month</p></li></ul><p>None of these were urgent. But in our always-on culture, the squeaky wheel gets the grease, and the person doing focused work gets... interrupted.</p><h2>What Deep Work Actually Looks Like</h2><p>Real programming - the kind that actually moves things forward - happens in long, uninterrupted blocks. 
I'm talking about the kind of <a href="https://en.wikipedia.org/wiki/Flow_(psychology)">flow state</a> where you can hold an entire system in your head and see connections that aren't obvious when you're constantly context-switching.</p><p>My best debugging sessions have lasted 3-4 hours straight. Same with designing new features or refactoring legacy code. You simply can't do this work in 15-minute chunks between meetings.</p><p>But <a href="https://calnewport.com/deep-work-rules-for-focused-success-in-a-distracted-world/">deep work</a> has become almost countercultural. If you're not immediately responsive, people assume you're not working hard. If you block time on your calendar for focused work, someone will schedule a meeting over it because "<em>it doesn't look like you're busy</em>."</p><p>The open office trend made this even worse. A few years back, I worked at a place where you could hear every phone call, every lunch discussion, and every time someone got excited about their food delivery. The theory was collaboration. The reality was that everyone wore headphones and messaged people sitting ten feet away.</p><h2>The Worst Offenders</h2><p><strong>"Got a Minute?" (Spoiler: It's Never a Minute)</strong></p><p>This might be my biggest pet peeve. Someone walks up with "got a minute?" but what they really mean is "I need you to drop everything and think about my problem right now."</p><p>The polite answer is always "sure," but the honest answer should be "that depends - what do you need, and can it wait until I finish this thought?"</p><p><strong>The "Emergency" Meeting Culture</strong></p><p>Everything needs a meeting, and every meeting is urgent. I've been in emergency meetings to plan other meetings. I've sat through hour-long discussions that could have been resolved with a 30-second Teams message.</p><p>My favorite was a meeting about improving meeting efficiency. 
The irony was apparently lost on everyone.</p><h2>What Actually Works</h2><p>After years of frustration, I've found a few things that genuinely work. The key is communicating these expectations upfront and sticking to them.</p>
      <p>
          <a href="https://blog.pragmaticdx.com/p/the-problem-of-interruptions">
              Read more
          </a>
      </p>
   ]]></content:encoded></item><item><title><![CDATA[Building Developer-First APIs]]></title><description><![CDATA[Behind every great API is a team that remembered what it feels like to be the developer on the other side of the screen.]]></description><link>https://blog.pragmaticdx.com/p/building-developer-first-apis</link><guid isPermaLink="false">https://blog.pragmaticdx.com/p/building-developer-first-apis</guid><dc:creator><![CDATA[Marcel Hauri]]></dc:creator><pubDate>Wed, 17 Sep 2025 22:10:01 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/b84e92ab-42c1-4b70-b914-9d028710287d_1024x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>You know that moment when you're trying to integrate with a new API and everything just... works? The documentation makes sense, the endpoints do what you expect, and you're up and running in minutes instead of hours. That's the magic of a developer-first API, and it's not an accident.</p><p>I've spent years on both sides of this equation - building APIs that developers love and wrestling with ones that make me want to throw my laptop out the window. The difference isn't usually in the underlying technology. It's in whether the team building the API actually thought about the poor developer who'd have to use it at two o&#8217;clock in the morning trying to meet a deadline.</p><h2>What Does "Developer-First" Actually Mean?</h2><p>Here's the thing - most companies say they're developer-first, but then they build APIs like they're checking boxes on a technical specification. Real developer-first thinking means you're designing for the human being who's going to integrate with your API, not just the computer that's going to execute the code.</p><p>I remember working with a payments API that had perfect uptime and blazing fast response times, but their webhook system was a nightmare. No retry logic, cryptic error messages, and documentation that hadn't been updated in two years. 
Technically solid? Sure. Developer-friendly? Absolutely not.</p><p>The developer-first approach means asking yourself uncomfortable questions: Would I want to integrate with this API on a Friday afternoon? Can a junior developer figure this out without asking for help? Does this feel like it was built by people who've actually written code that calls APIs?</p><h2>The Foundation: Design That Makes Sense</h2><h3>Keep It Predictable (Your Future Self Will Thank You)</h3><p>Consistency isn't just a nice-to-have - it's the difference between a developer confidently using your API and constantly second-guessing every request they make. When I see an API where some endpoints use snake_case and others use camelCase, I immediately know I'm in for a rough time.</p><p>Pick your conventions early and stick to them religiously. If you're using REST, actually follow <a href="https://restfulapi.net/">REST principles</a>. Don't create a <code>GET /users/delete/123</code> endpoint because it's "easier" than implementing proper <code>DELETE</code> handling. That's not easier for anyone except maybe your backend team, and it's definitely not easier for the developers trying to understand your API.</p><p>I've seen teams spend weeks debating whether to use <code>user_id</code> or <code>userId</code> in their API. Honestly? Pick one and move on. The consistency matters way more than the specific choice.</p><h3>Structure Resources Like a Human Would Think</h3><p>Your API structure should match how people actually think about your domain. If you're building an e-commerce API, developers expect to find products, orders, and customers - not abstract entities with names only your database architect understands.</p><p>Here's a real example that drove me crazy: an API where you had to call <code>/entities?type=customer</code> instead of just <code>/customers</code>. Technically it might have made sense from a database perspective, but it made the API harder to discover and use. 
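</p><p>To make the contrast concrete, here is a minimal sketch of resource-oriented URL building. The base URL and resource names are hypothetical, not from any particular API:</p>

```python
# Hypothetical sketch: expose domain resources directly (/customers/42)
# instead of the internal data model (/entities?type=customer).
# BASE and the resource names are illustrative.
BASE = "https://api.example.com/v1"

def resource_url(resource, resource_id=None):
    """Build a predictable, guessable URL for a domain resource."""
    url = f"{BASE}/{resource}"
    if resource_id is not None:
        url += f"/{resource_id}"
    return url

# Discoverable: the path says what it is.
print(resource_url("customers"))      # https://api.example.com/v1/customers
print(resource_url("customers", 42))  # https://api.example.com/v1/customers/42
```

<p>Whatever convention you pick, the test is simple: a developer should be able to guess the URL for a new resource without opening the docs.</p><p>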
Don't make developers learn your internal data model just to get basic functionality working.</p><p>Nested resources are great when they reflect real relationships. <code>/stores/123/products</code> makes perfect sense. But don't go overboard with nesting - <code>/companies/456/stores/123/departments/789/products/101</code> is just painful to work with.</p><h3>Documentation That Doesn't Suck</h3><p>Let me be blunt: most API documentation is terrible. It's either auto-generated technical specifications that tell you nothing about actual usage, or it's so high-level that you can't figure out how to make a simple request.</p><p>Good documentation feels like it was written by someone who's actually used the API. It includes the edge cases, the gotchas, the things that aren't obvious from the endpoint definitions. When <a href="https://docs.stripe.com/webhooks/process-undelivered-events">Stripe explains their webhook retry logic</a>, they don't just list the technical specifications - they explain why they designed it that way and how to handle the edge cases in your code.</p><p>Interactive documentation is huge. Being able to make real API calls right from the docs saves so much time. But make sure your examples are realistic - don't use <code>foo</code> and <code>bar</code> as example values when you could use actual product names or realistic user data.</p><h3>Error Messages That Help Instead of Confuse</h3><p>Nothing kills developer productivity like cryptic error messages. "Bad Request" with a 400 status code tells me absolutely nothing useful. What was bad about it? Which field was wrong? How do I fix it?</p><p>Compare these two error responses:</p><pre><code><code>{
  "error": "Bad Request"
}</code></code></pre><p>versus:</p><pre><code><code>{
  "error": {
    "code": "VALIDATION_FAILED",
    "message": "Email address is required and must be valid",
    "field": "email",
    "provided_value": "not-an-email"
  }
}</code></code></pre><p>The second one tells me exactly what's wrong and how to fix it. That's the difference between a frustrated developer and a happy one.</p><h2>Security That Works for Humans</h2><p>Security is non-negotiable, but it doesn't have to be a nightmare to implement. OAuth 2.0 is the standard for a reason - it's well-understood, and there are libraries for every language. Don't roll your own authentication scheme unless you have a really compelling reason.</p><p>API keys are fine for server-to-server communication, but make them easy to manage. Developers need to be able to rotate keys, revoke access, and monitor usage. And please, for the love of all that's holy, don't put sensitive operations behind just a simple API key with no additional verification.</p><p>Rate limiting is important, but be transparent about it. Include rate limit headers in your responses so developers can build proper backoff logic. And consider different rate limits for different types of operations - reading data and writing data shouldn't have the same limits.</p><h2>Versioning Without the Headaches</h2><p>API versioning is one of those topics that starts flame wars in developer communities, but here's what actually matters: predictability and reasonable migration paths.</p><p>URL versioning (<code>/v1/users</code>, <code>/v2/users</code>) is the most explicit and makes it clear which version you're calling. Header-based versioning is cleaner but less obvious. Pick one approach and stick with it.</p><p>The key is giving developers time to migrate. Don't deprecate v1 the day you release v2. Give at least six months' notice, provide clear migration documentation, and if possible, run both versions simultaneously for a while.</p><p>When you do make breaking changes, make them count. 
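</p><p>Staying with URL versioning for a moment: one reason it ages well is that the version is trivial to dispatch on server-side, which makes running v1 and v2 side by side cheap. A minimal sketch, with the path format as an assumption rather than a prescription:</p>

```python
# Hypothetical sketch: parse an explicitly versioned path like /v2/users
# so both API versions can be routed side by side during a migration window.
import re

def parse_versioned_path(path):
    """Split '/v2/users' into (2, 'users'); reject unversioned paths."""
    m = re.match(r"^/v(\d+)/(.+)$", path)
    if not m:
        raise ValueError(f"unversioned path: {path!r}")
    return int(m.group(1)), m.group(2)

version, resource = parse_versioned_path("/v2/users")
print(version, resource)  # 2 users
```

<p>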
Don't release v2 just to change a field name - batch your breaking changes so developers only have to deal with migration headaches occasionally, not constantly.</p><h2>Performance That Doesn't Surprise Anyone</h2><p>Fast APIs are great, but predictable performance is more important. If your API usually responds in 100ms but sometimes takes 10 seconds, that's worse than an API that consistently takes 500ms.</p><p>Implement proper pagination for any endpoint that could return large datasets. Don't make developers guess whether they need to paginate - if there's any chance a response could be large, paginate it from the start.</p><p>Caching headers are your friend. If data doesn't change often, tell clients they can cache it. This reduces load on your servers and makes the developer experience faster.</p><h2>The Small Details That Matter</h2><p>Field naming might seem trivial, but it adds up. Use clear, descriptive names. <code>created_at</code> is better than <code>created</code> which is better than <code>ct</code>. Don't make developers guess what your fields contain.</p><p>Boolean fields should be obviously boolean. <code>is_active</code>, <code>has_children</code>, <code>can_edit</code> - these names make it clear what type of value to expect.</p><p>For timestamps, use ISO 8601 format with timezone information. Yes, it's verbose, but it's unambiguous. <code>2023-10-15T14:30:00.000Z</code> tells me everything I need to know.</p><h2>Building Community and Getting Feedback</h2><p>The best developer-first APIs evolve based on real developer feedback. Set up channels for developers to ask questions, report issues, and suggest improvements. GitHub issues, Discord servers, dedicated forums - pick whatever works for your community.</p><p>Pay attention to the questions developers ask. If multiple people are confused about the same thing, that's not a training problem - it's a design problem. 
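</p><p>On the pagination point from the performance section above: when every large endpoint paginates from the start, the client side stays boring in the best way. A sketch of a cursor-following loop; the <code>next_cursor</code> field and the simulated pages are hypothetical stand-ins for a real API:</p>

```python
# Hypothetical sketch: a client follows `next_cursor` until the server
# stops returning one. PAGES simulates server responses keyed by cursor;
# fetch_page stands in for a real HTTP call.

PAGES = {
    None: {"items": [1, 2], "next_cursor": "abc"},
    "abc": {"items": [3, 4], "next_cursor": None},
}

def fetch_page(cursor=None):
    return PAGES[cursor]

def fetch_all():
    items, cursor = [], None
    while True:
        page = fetch_page(cursor)
        items.extend(page["items"])
        cursor = page.get("next_cursor")
        if cursor is None:  # server omits the cursor on the last page
            return items

print(fetch_all())  # [1, 2, 3, 4]
```

<p>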
Use support questions as product feedback.</p><p>Consider creating SDKs for popular languages, but don't let them become a crutch for poor API design. A good API should be usable with just HTTP requests and good documentation.</p><h2>Testing From the Outside In</h2><p>Test your API like a developer would use it, not like you built it. Write integration tests that make real HTTP requests. Try to integrate with your own API using different programming languages and frameworks.</p><p>Set up monitoring that tracks developer-focused metrics, not just system health. Track error rates by endpoint, measure time-to-first-successful-request for new developers, and monitor documentation page views to understand what developers are struggling with.</p><h2>The Long Game</h2><p>Building a developer-first API isn't a one-time effort - it's an ongoing commitment. Technology changes, developer expectations evolve, and your own product grows. The APIs that developers love long-term are the ones that adapt while maintaining their core principles of clarity, consistency, and respect for the developer experience.</p><p>Remember, every developer who has a good experience with your API becomes a potential advocate. They'll recommend you in architecture discussions, write blog posts about successful integrations, and contribute to the community around your platform. That organic growth is worth far more than any marketing campaign.</p><p><strong>The goal isn't just to build an API that works - it's to build an API that developers actually enjoy working with.</strong> When you achieve that, you'll know it. 
The support tickets decrease, the community grows, and developers start building things with your API that even you didn't expect.</p><p>That's when you know you've built something truly developer-first.</p>]]></content:encoded></item></channel></rss>