Why AI Adoption Is Universal but Its Benefits Are Not
According to DORA, AI has become standard practice across software teams worldwide. But without organizational foundations like strong platforms and workflows, software delivery remains fragile.
The 2025 DORA State of AI-assisted Software Development report reveals a notable disconnect in AI adoption outcomes. While 95% of developers now use AI tools at work, only 70% trust the code these tools generate. This gap points to implementation challenges that many organizations haven’t fully addressed.
The research, based on responses from nearly 5,000 technology professionals worldwide, shows that AI’s impact varies significantly depending on organizational context rather than tool sophistication alone.
Why this matters: Understanding what determines AI success can help organizations make better investment decisions and avoid common implementation pitfalls.
The Trust Gap in AI Usage
The data shows widespread AI adoption alongside measured skepticism. While most developers report productivity improvements from AI tools, 30% express little to no trust in AI-generated output. This suggests a “trust but verify” approach that the researchers interpret as potentially healthy rather than problematic.
During interviews, developers compared this skepticism to how they approach solutions found on Stack Overflow—useful resources that still require validation. The research indicates this cautious approach may be a sign of mature adoption rather than a barrier to it.
However, this trust gap does highlight an important consideration: organizations need to invest in training focused on critical evaluation and validation of AI-generated work, not just tool usage.
Seven Team Performance Profiles
The report identifies seven distinct team archetypes based on performance, stability, and well-being metrics:
Foundational challenges (10%): Multiple performance gaps
Legacy bottleneck (11%): Reactive work due to unstable systems
Constrained by process (17%): Stable systems, inefficient processes
High impact, low cadence (7%): Strong results, high instability
Stable and methodical (15%): Quality work at deliberate pace
Pragmatic performers (20%): Strong delivery with average well-being
Harmonious high-achievers (20%): High performance with low burnout
As the report itself summarizes: “Our analysis revealed seven distinct team archetypes, ranging from those excelling in healthy, sustainable environments (Harmonious high-achievers) to those trapped by technical debt (Legacy bottleneck) or inefficient processes (Constrained by process).”
This framework suggests that traditional software delivery metrics alone don’t capture the full picture of team health and effectiveness.
The Platform Foundation Effect
One of the clearer findings involves platform engineering. With 90% adoption across organizations, the research shows a strong correlation between platform quality and AI effectiveness. Organizations with higher-quality internal platforms see amplified benefits from AI adoption on organizational performance.
The data suggests that without solid platform foundations, AI benefits tend to remain localized rather than scaling across the organization. This isn’t necessarily about having the latest tools, but about having systems that can effectively distribute and govern AI capabilities.
Value Stream Management as a Multiplier
Teams practicing value stream management—systematic visualization and improvement of work flow—show stronger results from AI investments. The research indicates that VSM helps ensure AI gets applied to actual system constraints rather than just individual tasks.
The mechanism appears to be that VSM provides the systems-level view needed to direct AI toward meaningful bottlenecks rather than optimizing steps that aren’t actually constraining overall performance.
The DORA AI Capabilities Model
The research identifies seven organizational capabilities that correlate with better AI outcomes:
Clear and communicated AI policies
An organization with a clear, communicated AI stance encourages and expects AI use by its developers, supports their experimentation with AI at work, and makes explicit which AI tools are permitted and how the AI policy applies to staff.
Healthy data ecosystems
When organizations invest in creating and maintaining high-quality, accessible, unified data ecosystems, they see greater organizational performance benefits than AI adoption alone provides.
AI-accessible internal data
Organizations that invest time in connecting their AI tools to their internal systems may observe better outcomes than organizations that rely on the less specialized knowledge provided by generic foundation models.
Strong version control practices
One of the most tangible examples of this is the reliance on rollback or revert features. The ability to undo changes swiftly and without fuss is not just a convenience; it’s a critical enabler of speed and resilience.
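The revert workflow the report highlights can be sketched with plain Git. The script below is a minimal, self-contained illustration (the repository, file name, and commit messages are invented for the example, not taken from the report): an AI-suggested change lands as its own commit, proves faulty, and `git revert` undoes it with a new commit rather than rewriting history.

```shell
#!/bin/sh
# Sketch: undoing a faulty AI-assisted change with git revert.
# All names here (service.py, commit messages) are illustrative.
set -e
dir=$(mktemp -d)
cd "$dir"
git init -q
git config user.email "dev@example.com"
git config user.name "Dev"

echo "stable implementation" > service.py
git add service.py
git commit -qm "Baseline implementation"

# An AI-suggested change lands as its own commit...
echo "ai-generated refactor" > service.py
git commit -qam "Apply AI-suggested refactor"

# ...and turns out to be faulty. revert adds a new commit that
# undoes it, preserving history (unlike reset, which rewrites it).
git revert --no-edit HEAD >/dev/null

cat service.py
```

Because the revert is itself a commit, the team keeps a full audit trail of what the AI tool proposed and when it was rolled back, which is exactly the swift, low-fuss undo the report describes.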
Working in small batches
Working in small batches increases reported product performance, while also decreasing perceived friction for AI-assisted teams.
User-centric focus
Organizations that encourage AI adoption will benefit from incorporating a rich understanding of their end users, their goals, and their feedback into their product roadmaps and strategies.
Quality internal platforms
We believe designing and maintaining quality internal development platforms is an important capability for organizations to successfully develop software in an AI-assisted environment.
Teams with these capabilities show stronger positive effects from AI adoption across multiple performance dimensions.
Persistent Challenges
Despite productivity claims, the research shows AI has minimal impact on developer burnout or workplace friction. The authors suggest these may be systemic issues that technology alone cannot address.
AI adoption continues to correlate with increased software delivery instability, though this year’s data shows improvement in software delivery throughput compared to last year’s findings. This suggests some organizational adaptation is occurring.
Year-over-Year Changes
Comparing 2024 to 2025 results, several relationships have shifted positively:
AI’s impact on valuable work (negative to positive)
Software delivery throughput (negative to positive)
Product performance (neutral to positive)
These changes suggest both tool improvement and organizational learning over the past year.
Practical Implications
The research suggests several practical considerations:
For AI tool selection: Focus on organizational readiness factors rather than just tool capabilities.
For implementation: Invest in supporting systems (platforms, data, processes) alongside AI tool procurement.
For measurement: Consider team health factors beyond just delivery metrics when evaluating AI impact.
For training: Emphasize critical evaluation skills and verification practices, not just tool usage.
The research positions current AI adoption as being in transition—moving from initial experimentation toward more systematic organizational integration. Success appears to depend more on implementation context than on specific AI capabilities.